00:00:00.001 Started by upstream project "autotest-per-patch" build number 122821 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.090 The recommended git tool is: git 00:00:00.090 using credential 00000000-0000-0000-0000-000000000002 00:00:00.092 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.124 Fetching changes from the remote Git repository 00:00:00.126 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.161 Using shallow fetch with depth 1 00:00:00.161 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.161 > git --version # timeout=10 00:00:00.185 > git --version # 'git version 2.39.2' 00:00:00.185 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.186 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.186 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.815 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.829 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.841 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:05.841 > git config core.sparsecheckout # timeout=10 00:00:05.851 > git read-tree -mu HEAD # timeout=10 00:00:05.880 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:05.930 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:05.931 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:06.076 [Pipeline] Start of Pipeline 00:00:06.086 [Pipeline] library 00:00:06.086 Loading library shm_lib@master 00:00:06.087 Library shm_lib@master is cached. Copying from home. 00:00:06.096 [Pipeline] node 00:00:06.101 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:06.102 [Pipeline] { 00:00:06.110 [Pipeline] catchError 00:00:06.110 [Pipeline] { 00:00:06.118 [Pipeline] wrap 00:00:06.124 [Pipeline] { 00:00:06.129 [Pipeline] stage 00:00:06.131 [Pipeline] { (Prologue) 00:00:06.144 [Pipeline] echo 00:00:06.145 Node: VM-host-SM9 00:00:06.148 [Pipeline] cleanWs 00:00:06.156 [WS-CLEANUP] Deleting project workspace... 00:00:06.156 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.160 [WS-CLEANUP] done 00:00:06.318 [Pipeline] setCustomBuildProperty 00:00:06.400 [Pipeline] nodesByLabel 00:00:06.401 Found a total of 1 nodes with the 'sorcerer' label 00:00:06.410 [Pipeline] httpRequest 00:00:06.414 HttpMethod: GET 00:00:06.414 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:06.424 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:06.470 Response Code: HTTP/1.1 200 OK 00:00:06.471 Success: Status code 200 is in the accepted range: 200,404 00:00:06.472 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:24.255 [Pipeline] sh 00:00:24.537 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:24.556 [Pipeline] httpRequest 00:00:24.560 HttpMethod: GET 00:00:24.561 URL: http://10.211.164.101/packages/spdk_29773365071b8e2775c5fd84455d9767c82e3d56.tar.gz 00:00:24.562 Sending request to url: http://10.211.164.101/packages/spdk_29773365071b8e2775c5fd84455d9767c82e3d56.tar.gz 00:00:24.594 Response Code: HTTP/1.1 200 OK 00:00:24.595 Success: Status code 200 is in the accepted range: 200,404 00:00:24.596 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_29773365071b8e2775c5fd84455d9767c82e3d56.tar.gz 00:01:08.980 [Pipeline] sh 00:01:09.310 + tar --no-same-owner -xf spdk_29773365071b8e2775c5fd84455d9767c82e3d56.tar.gz 00:01:12.622 [Pipeline] sh 00:01:12.901 + git -C spdk log --oneline -n5 00:01:12.901 297733650 nvmf: don't touch subsystem->flags.allow_any_host directly 00:01:12.901 35948d8fa build: rename SPDK_MOCK_SYSCALLS -> SPDK_MOCK_SYMBOLS 00:01:12.901 69872294e nvme: make spdk_nvme_dhchap_get_digest_length() public 00:01:12.901 67ab645cd nvmf/auth: send AUTH_failure1 message 00:01:12.901 c54a29d8f test/nvmf: add auth timeout unit tests 00:01:12.919 [Pipeline] writeFile 00:01:12.933 [Pipeline] sh 00:01:13.263 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:13.277 [Pipeline] sh 00:01:13.552 + cat autorun-spdk.conf 00:01:13.552 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.552 SPDK_TEST_NVMF=1 00:01:13.552 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.552 SPDK_TEST_USDT=1 00:01:13.552 SPDK_TEST_NVMF_MDNS=1 00:01:13.552 SPDK_RUN_UBSAN=1 00:01:13.552 NET_TYPE=virt 00:01:13.552 SPDK_JSONRPC_GO_CLIENT=1 00:01:13.552 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.557 RUN_NIGHTLY=0 00:01:13.560 [Pipeline] } 00:01:13.580 [Pipeline] // stage 00:01:13.596 [Pipeline] stage 00:01:13.598 [Pipeline] { (Run VM) 00:01:13.614 [Pipeline] sh 00:01:13.893 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:13.893 + echo 'Start stage prepare_nvme.sh' 00:01:13.893 Start stage prepare_nvme.sh 00:01:13.893 + [[ -n 4 ]] 00:01:13.893 + disk_prefix=ex4 00:01:13.893 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:13.893 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:13.893 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:13.893 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.893 ++ SPDK_TEST_NVMF=1 00:01:13.893 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.893 ++ SPDK_TEST_USDT=1 00:01:13.893 ++ SPDK_TEST_NVMF_MDNS=1 00:01:13.893 ++ SPDK_RUN_UBSAN=1 00:01:13.893 ++ NET_TYPE=virt 00:01:13.893 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:13.893 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.893 ++ RUN_NIGHTLY=0 00:01:13.893 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:13.893 + 
nvme_files=() 00:01:13.893 + declare -A nvme_files 00:01:13.893 + backend_dir=/var/lib/libvirt/images/backends 00:01:13.894 + nvme_files['nvme.img']=5G 00:01:13.894 + nvme_files['nvme-cmb.img']=5G 00:01:13.894 + nvme_files['nvme-multi0.img']=4G 00:01:13.894 + nvme_files['nvme-multi1.img']=4G 00:01:13.894 + nvme_files['nvme-multi2.img']=4G 00:01:13.894 + nvme_files['nvme-openstack.img']=8G 00:01:13.894 + nvme_files['nvme-zns.img']=5G 00:01:13.894 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:13.894 + (( SPDK_TEST_FTL == 1 )) 00:01:13.894 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:13.894 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:13.894 + for nvme in "${!nvme_files[@]}" 00:01:13.894 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:13.894 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.894 + for nvme in "${!nvme_files[@]}" 00:01:13.894 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:13.894 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:13.894 + for nvme in "${!nvme_files[@]}" 00:01:13.894 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:13.894 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:13.894 + for nvme in "${!nvme_files[@]}" 00:01:13.894 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:13.894 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:13.894 + for nvme in "${!nvme_files[@]}" 00:01:13.894 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:13.894 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.894 + for nvme in "${!nvme_files[@]}" 00:01:13.894 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:13.894 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.894 + for nvme in "${!nvme_files[@]}" 00:01:13.894 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:13.894 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.152 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:14.152 + echo 'End stage prepare_nvme.sh' 00:01:14.152 End stage prepare_nvme.sh 00:01:14.164 [Pipeline] sh 00:01:14.446 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:14.446 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:01:14.446 00:01:14.446 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:14.446 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:14.446 
VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:14.446 HELP=0 00:01:14.446 DRY_RUN=0 00:01:14.446 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:14.446 NVME_DISKS_TYPE=nvme,nvme, 00:01:14.446 NVME_AUTO_CREATE=0 00:01:14.446 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:14.446 NVME_CMB=,, 00:01:14.446 NVME_PMR=,, 00:01:14.446 NVME_ZNS=,, 00:01:14.446 NVME_MS=,, 00:01:14.446 NVME_FDP=,, 00:01:14.446 SPDK_VAGRANT_DISTRO=fedora38 00:01:14.446 SPDK_VAGRANT_VMCPU=10 00:01:14.446 SPDK_VAGRANT_VMRAM=12288 00:01:14.446 SPDK_VAGRANT_PROVIDER=libvirt 00:01:14.446 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:14.446 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:14.446 SPDK_OPENSTACK_NETWORK=0 00:01:14.446 VAGRANT_PACKAGE_BOX=0 00:01:14.446 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:14.446 FORCE_DISTRO=true 00:01:14.446 VAGRANT_BOX_VERSION= 00:01:14.446 EXTRA_VAGRANTFILES= 00:01:14.446 NIC_MODEL=e1000 00:01:14.446 00:01:14.446 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:01:14.446 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:17.736 Bringing machine 'default' up with 'libvirt' provider... 00:01:18.305 ==> default: Creating image (snapshot of base box volume). 00:01:18.563 ==> default: Creating domain with the following settings... 00:01:18.563 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715726910_f4d18cb98a86e5c72b34 00:01:18.563 ==> default: -- Domain type: kvm 00:01:18.563 ==> default: -- Cpus: 10 00:01:18.563 ==> default: -- Feature: acpi 00:01:18.563 ==> default: -- Feature: apic 00:01:18.563 ==> default: -- Feature: pae 00:01:18.563 ==> default: -- Memory: 12288M 00:01:18.563 ==> default: -- Memory Backing: hugepages: 00:01:18.563 ==> default: -- Management MAC: 00:01:18.563 ==> default: -- Loader: 00:01:18.563 ==> default: -- Nvram: 00:01:18.563 ==> default: -- Base box: spdk/fedora38 00:01:18.563 ==> default: -- Storage pool: default 00:01:18.563 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715726910_f4d18cb98a86e5c72b34.img (20G) 00:01:18.563 ==> default: -- Volume Cache: default 00:01:18.563 ==> default: -- Kernel: 00:01:18.563 ==> default: -- Initrd: 00:01:18.563 ==> default: -- Graphics Type: vnc 00:01:18.563 ==> default: -- Graphics Port: -1 00:01:18.563 ==> default: -- Graphics IP: 127.0.0.1 00:01:18.563 ==> default: -- Graphics Password: Not defined 00:01:18.563 ==> default: -- Video Type: cirrus 00:01:18.563 ==> default: -- Video VRAM: 9216 00:01:18.563 ==> default: -- Sound Type: 00:01:18.563 ==> default: -- Keymap: en-us 00:01:18.563 ==> default: -- TPM Path: 00:01:18.563 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:18.563 ==> default: -- Command line args: 00:01:18.563 ==> default: -> value=-device, 00:01:18.563 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:18.563 ==> default: -> value=-drive, 00:01:18.563 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:18.563 ==> default: -> value=-device, 00:01:18.563 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.563 ==> 
default: -> value=-device, 00:01:18.563 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:18.563 ==> default: -> value=-drive, 00:01:18.563 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:18.563 ==> default: -> value=-device, 00:01:18.563 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.563 ==> default: -> value=-drive, 00:01:18.563 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:18.563 ==> default: -> value=-device, 00:01:18.563 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.563 ==> default: -> value=-drive, 00:01:18.563 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:18.563 ==> default: -> value=-device, 00:01:18.563 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.821 ==> default: Creating shared folders metadata... 00:01:18.821 ==> default: Starting domain. 00:01:20.198 ==> default: Waiting for domain to get an IP address... 00:01:42.121 ==> default: Waiting for SSH to become available... 00:01:42.121 ==> default: Configuring and enabling network interfaces... 00:01:45.404 default: SSH address: 192.168.121.154:22 00:01:45.404 default: SSH username: vagrant 00:01:45.404 default: SSH auth method: private key 00:01:47.303 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:55.455 ==> default: Mounting SSHFS shared folder... 00:01:56.020 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:56.020 ==> default: Checking Mount.. 00:01:57.393 ==> default: Folder Successfully Mounted! 00:01:57.393 ==> default: Running provisioner: file... 00:01:57.960 default: ~/.gitconfig => .gitconfig 00:01:58.527 00:01:58.527 SUCCESS! 00:01:58.527 00:01:58.527 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:58.527 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:58.527 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
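The SUCCESS banner above doubles as a how-to for reaching the freshly provisioned VM. As a minimal sketch (not part of the captured log), the same bring-up can be replayed by hand using only commands that appear verbatim earlier in this log, assuming the Jenkins workspace layout and jbp checkout shown above:

# Sketch only: manual replay of the VM bring-up above. Paths and values
# are copied from the log; nothing here is verified beyond that.
cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh

# Per the banner: enter the VM, and later tear it down completely.
cd /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt
vagrant ssh
vagrant destroy
rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt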
00:01:58.527 00:01:58.535 [Pipeline] } 00:01:58.553 [Pipeline] // stage 00:01:58.560 [Pipeline] dir 00:01:58.560 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:58.561 [Pipeline] { 00:01:58.572 [Pipeline] catchError 00:01:58.574 [Pipeline] { 00:01:58.587 [Pipeline] sh 00:01:58.865 + vagrant ssh-config --host vagrant+ 00:01:58.865 sed -ne /^Host/,$p 00:01:58.865 + tee ssh_conf 00:02:03.054 Host vagrant 00:02:03.054 HostName 192.168.121.154 00:02:03.054 User vagrant 00:02:03.054 Port 22 00:02:03.054 UserKnownHostsFile /dev/null 00:02:03.054 StrictHostKeyChecking no 00:02:03.054 PasswordAuthentication no 00:02:03.054 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:02:03.054 IdentitiesOnly yes 00:02:03.054 LogLevel FATAL 00:02:03.054 ForwardAgent yes 00:02:03.054 ForwardX11 yes 00:02:03.054 00:02:03.080 [Pipeline] withEnv 00:02:03.082 [Pipeline] { 00:02:03.096 [Pipeline] sh 00:02:03.415 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:03.415 source /etc/os-release 00:02:03.415 [[ -e /image.version ]] && img=$(< /image.version) 00:02:03.415 # Minimal, systemd-like check. 00:02:03.415 if [[ -e /.dockerenv ]]; then 00:02:03.415 # Clear garbage from the node's name: 00:02:03.415 # agt-er_autotest_547-896 -> autotest_547-896 00:02:03.415 # $HOSTNAME is the actual container id 00:02:03.415 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:03.415 if mountpoint -q /etc/hostname; then 00:02:03.415 # We can assume this is a mount from a host where container is running, 00:02:03.415 # so fetch its hostname to easily identify the target swarm worker. 00:02:03.415 container="$(< /etc/hostname) ($agent)" 00:02:03.415 else 00:02:03.415 # Fallback 00:02:03.415 container=$agent 00:02:03.415 fi 00:02:03.415 fi 00:02:03.415 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:03.415 00:02:03.425 [Pipeline] } 00:02:03.443 [Pipeline] // withEnv 00:02:03.450 [Pipeline] setCustomBuildProperty 00:02:03.463 [Pipeline] stage 00:02:03.465 [Pipeline] { (Tests) 00:02:03.482 [Pipeline] sh 00:02:03.759 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:03.773 [Pipeline] timeout 00:02:03.773 Timeout set to expire in 40 min 00:02:03.774 [Pipeline] { 00:02:03.791 [Pipeline] sh 00:02:04.071 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:04.638 HEAD is now at 297733650 nvmf: don't touch subsystem->flags.allow_any_host directly 00:02:04.652 [Pipeline] sh 00:02:04.931 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:05.202 [Pipeline] sh 00:02:05.483 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:05.756 [Pipeline] sh 00:02:06.034 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:02:06.293 ++ readlink -f spdk_repo 00:02:06.293 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:06.293 + [[ -n /home/vagrant/spdk_repo ]] 00:02:06.293 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:06.293 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:06.293 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:06.293 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:06.293 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:06.293 + cd /home/vagrant/spdk_repo 00:02:06.293 + source /etc/os-release 00:02:06.293 ++ NAME='Fedora Linux' 00:02:06.293 ++ VERSION='38 (Cloud Edition)' 00:02:06.293 ++ ID=fedora 00:02:06.293 ++ VERSION_ID=38 00:02:06.293 ++ VERSION_CODENAME= 00:02:06.293 ++ PLATFORM_ID=platform:f38 00:02:06.293 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:06.293 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:06.293 ++ LOGO=fedora-logo-icon 00:02:06.293 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:06.293 ++ HOME_URL=https://fedoraproject.org/ 00:02:06.293 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:06.293 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:06.293 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:06.293 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:06.293 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:06.293 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:06.293 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:06.293 ++ SUPPORT_END=2024-05-14 00:02:06.293 ++ VARIANT='Cloud Edition' 00:02:06.293 ++ VARIANT_ID=cloud 00:02:06.293 + uname -a 00:02:06.293 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:06.293 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:06.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:06.552 Hugepages 00:02:06.552 node hugesize free / total 00:02:06.552 node0 1048576kB 0 / 0 00:02:06.552 node0 2048kB 0 / 0 00:02:06.552 00:02:06.552 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:06.812 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:06.812 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:06.812 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:06.812 + rm -f /tmp/spdk-ld-path 00:02:06.812 + source autorun-spdk.conf 00:02:06.812 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.812 ++ SPDK_TEST_NVMF=1 00:02:06.812 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:06.812 ++ SPDK_TEST_USDT=1 00:02:06.812 ++ SPDK_TEST_NVMF_MDNS=1 00:02:06.812 ++ SPDK_RUN_UBSAN=1 00:02:06.812 ++ NET_TYPE=virt 00:02:06.812 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:06.812 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.812 ++ RUN_NIGHTLY=0 00:02:06.812 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:06.812 + [[ -n '' ]] 00:02:06.812 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:06.812 + for M in /var/spdk/build-*-manifest.txt 00:02:06.812 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:06.812 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.812 + for M in /var/spdk/build-*-manifest.txt 00:02:06.812 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:06.812 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.812 ++ uname 00:02:06.812 + [[ Linux == \L\i\n\u\x ]] 00:02:06.812 + sudo dmesg -T 00:02:06.812 + sudo dmesg --clear 00:02:06.812 + dmesg_pid=5147 00:02:06.812 + [[ Fedora Linux == FreeBSD ]] 00:02:06.812 + sudo dmesg -Tw 00:02:06.812 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.812 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.812 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:06.812 + [[ -x /usr/src/fio-static/fio ]] 00:02:06.812 + export FIO_BIN=/usr/src/fio-static/fio 00:02:06.812 + 
FIO_BIN=/usr/src/fio-static/fio 00:02:06.812 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:06.812 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:06.812 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:06.812 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.812 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.812 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:06.812 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.812 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.812 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:06.812 Test configuration: 00:02:06.812 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.812 SPDK_TEST_NVMF=1 00:02:06.812 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:06.812 SPDK_TEST_USDT=1 00:02:06.812 SPDK_TEST_NVMF_MDNS=1 00:02:06.812 SPDK_RUN_UBSAN=1 00:02:06.812 NET_TYPE=virt 00:02:06.812 SPDK_JSONRPC_GO_CLIENT=1 00:02:06.812 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.071 RUN_NIGHTLY=0 22:49:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:07.071 22:49:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:07.071 22:49:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.071 22:49:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.071 22:49:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.071 22:49:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.071 22:49:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.071 22:49:19 -- paths/export.sh@5 -- $ export PATH 00:02:07.071 22:49:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.071 22:49:19 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:07.071 22:49:19 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:07.071 22:49:19 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715726959.XXXXXX 00:02:07.071 22:49:19 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715726959.9o6YqT 00:02:07.071 22:49:19 -- 
common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:07.071 22:49:19 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:02:07.071 22:49:19 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:07.071 22:49:19 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:07.072 22:49:19 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:07.072 22:49:19 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:07.072 22:49:19 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:07.072 22:49:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.072 22:49:19 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:02:07.072 22:49:19 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:07.072 22:49:19 -- pm/common@17 -- $ local monitor 00:02:07.072 22:49:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.072 22:49:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.072 22:49:19 -- pm/common@21 -- $ date +%s 00:02:07.072 22:49:19 -- pm/common@25 -- $ sleep 1 00:02:07.072 22:49:19 -- pm/common@21 -- $ date +%s 00:02:07.072 22:49:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715726959 00:02:07.072 22:49:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715726959 00:02:07.072 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715726959_collect-vmstat.pm.log 00:02:07.072 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715726959_collect-cpu-load.pm.log 00:02:08.005 22:49:20 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:08.005 22:49:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.005 22:49:20 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.005 22:49:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:08.005 22:49:20 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.005 Tue May 14 10:49:20 PM UTC 2024 00:02:08.005 22:49:20 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.005 v24.05-pre-623-g297733650 00:02:08.005 22:49:20 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:08.005 22:49:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.005 22:49:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.005 22:49:20 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:08.005 22:49:20 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:08.005 22:49:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.005 ************************************ 00:02:08.005 START TEST ubsan 00:02:08.005 ************************************ 00:02:08.005 using ubsan 00:02:08.005 22:49:20 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:08.005 00:02:08.005 real 0m0.000s 00:02:08.005 user 0m0.000s 00:02:08.005 sys 0m0.000s 
00:02:08.005 22:49:20 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:08.005 ************************************ 00:02:08.005 END TEST ubsan 00:02:08.005 22:49:20 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.005 ************************************ 00:02:08.005 22:49:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:08.005 22:49:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:08.005 22:49:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:08.005 22:49:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:08.005 22:49:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:08.005 22:49:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:08.005 22:49:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:08.005 22:49:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:08.005 22:49:20 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:02:08.262 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:08.262 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:08.520 Using 'verbs' RDMA provider 00:02:24.315 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:34.346 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:34.346 go version go1.21.1 linux/amd64 00:02:34.346 Creating mk/config.mk...done. 00:02:34.346 Creating mk/cc.flags.mk...done. 00:02:34.346 Type 'make' to build. 00:02:34.346 22:49:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:34.346 22:49:46 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:34.346 22:49:46 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:34.346 22:49:46 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.346 ************************************ 00:02:34.346 START TEST make 00:02:34.346 ************************************ 00:02:34.346 22:49:46 make -- common/autotest_common.sh@1121 -- $ make -j10 00:02:34.603 make[1]: Nothing to be done for 'all'. 
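The configure invocation above records the exact feature set built for this run. A minimal sketch of repeating that step by hand inside the VM, with the flag list copied verbatim from the log and assuming the repository has already been synced to /home/vagrant/spdk_repo/spdk as shown earlier:

# Sketch only: manual replay of the configure + build step captured above.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt \
    --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-coverage \
    --with-ublk --with-avahi --with-golang --with-shared
make -j10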
00:02:56.519 The Meson build system 00:02:56.519 Version: 1.3.1 00:02:56.519 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:56.519 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:56.519 Build type: native build 00:02:56.519 Program cat found: YES (/usr/bin/cat) 00:02:56.519 Project name: DPDK 00:02:56.519 Project version: 23.11.0 00:02:56.519 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:56.519 C linker for the host machine: cc ld.bfd 2.39-16 00:02:56.519 Host machine cpu family: x86_64 00:02:56.519 Host machine cpu: x86_64 00:02:56.519 Message: ## Building in Developer Mode ## 00:02:56.519 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:56.519 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:56.519 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:56.519 Program python3 found: YES (/usr/bin/python3) 00:02:56.519 Program cat found: YES (/usr/bin/cat) 00:02:56.519 Compiler for C supports arguments -march=native: YES 00:02:56.519 Checking for size of "void *" : 8 00:02:56.519 Checking for size of "void *" : 8 (cached) 00:02:56.519 Library m found: YES 00:02:56.519 Library numa found: YES 00:02:56.519 Has header "numaif.h" : YES 00:02:56.519 Library fdt found: NO 00:02:56.519 Library execinfo found: NO 00:02:56.519 Has header "execinfo.h" : YES 00:02:56.519 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:56.519 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:56.519 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:56.519 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:56.519 Run-time dependency openssl found: YES 3.0.9 00:02:56.519 Run-time dependency libpcap found: YES 1.10.4 00:02:56.519 Has header "pcap.h" with dependency libpcap: YES 00:02:56.519 Compiler for C supports arguments -Wcast-qual: YES 00:02:56.519 Compiler for C supports arguments -Wdeprecated: YES 00:02:56.519 Compiler for C supports arguments -Wformat: YES 00:02:56.519 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:56.519 Compiler for C supports arguments -Wformat-security: NO 00:02:56.519 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:56.519 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:56.519 Compiler for C supports arguments -Wnested-externs: YES 00:02:56.519 Compiler for C supports arguments -Wold-style-definition: YES 00:02:56.519 Compiler for C supports arguments -Wpointer-arith: YES 00:02:56.519 Compiler for C supports arguments -Wsign-compare: YES 00:02:56.519 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:56.519 Compiler for C supports arguments -Wundef: YES 00:02:56.519 Compiler for C supports arguments -Wwrite-strings: YES 00:02:56.519 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:56.519 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:56.519 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:56.519 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:56.520 Program objdump found: YES (/usr/bin/objdump) 00:02:56.520 Compiler for C supports arguments -mavx512f: YES 00:02:56.520 Checking if "AVX512 checking" compiles: YES 00:02:56.520 Fetching value of define "__SSE4_2__" : 1 00:02:56.520 Fetching value of define "__AES__" : 1 00:02:56.520 Fetching value of define "__AVX__" : 1 00:02:56.520 
Fetching value of define "__AVX2__" : 1 00:02:56.520 Fetching value of define "__AVX512BW__" : (undefined) 00:02:56.520 Fetching value of define "__AVX512CD__" : (undefined) 00:02:56.520 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:56.520 Fetching value of define "__AVX512F__" : (undefined) 00:02:56.520 Fetching value of define "__AVX512VL__" : (undefined) 00:02:56.520 Fetching value of define "__PCLMUL__" : 1 00:02:56.520 Fetching value of define "__RDRND__" : 1 00:02:56.520 Fetching value of define "__RDSEED__" : 1 00:02:56.520 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:56.520 Fetching value of define "__znver1__" : (undefined) 00:02:56.520 Fetching value of define "__znver2__" : (undefined) 00:02:56.520 Fetching value of define "__znver3__" : (undefined) 00:02:56.520 Fetching value of define "__znver4__" : (undefined) 00:02:56.520 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:56.520 Message: lib/log: Defining dependency "log" 00:02:56.520 Message: lib/kvargs: Defining dependency "kvargs" 00:02:56.520 Message: lib/telemetry: Defining dependency "telemetry" 00:02:56.520 Checking for function "getentropy" : NO 00:02:56.520 Message: lib/eal: Defining dependency "eal" 00:02:56.520 Message: lib/ring: Defining dependency "ring" 00:02:56.520 Message: lib/rcu: Defining dependency "rcu" 00:02:56.520 Message: lib/mempool: Defining dependency "mempool" 00:02:56.520 Message: lib/mbuf: Defining dependency "mbuf" 00:02:56.520 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:56.520 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:56.520 Compiler for C supports arguments -mpclmul: YES 00:02:56.520 Compiler for C supports arguments -maes: YES 00:02:56.520 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:56.520 Compiler for C supports arguments -mavx512bw: YES 00:02:56.520 Compiler for C supports arguments -mavx512dq: YES 00:02:56.520 Compiler for C supports arguments -mavx512vl: YES 00:02:56.520 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:56.520 Compiler for C supports arguments -mavx2: YES 00:02:56.520 Compiler for C supports arguments -mavx: YES 00:02:56.520 Message: lib/net: Defining dependency "net" 00:02:56.520 Message: lib/meter: Defining dependency "meter" 00:02:56.520 Message: lib/ethdev: Defining dependency "ethdev" 00:02:56.520 Message: lib/pci: Defining dependency "pci" 00:02:56.520 Message: lib/cmdline: Defining dependency "cmdline" 00:02:56.520 Message: lib/hash: Defining dependency "hash" 00:02:56.520 Message: lib/timer: Defining dependency "timer" 00:02:56.520 Message: lib/compressdev: Defining dependency "compressdev" 00:02:56.520 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:56.520 Message: lib/dmadev: Defining dependency "dmadev" 00:02:56.520 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:56.520 Message: lib/power: Defining dependency "power" 00:02:56.520 Message: lib/reorder: Defining dependency "reorder" 00:02:56.520 Message: lib/security: Defining dependency "security" 00:02:56.520 Has header "linux/userfaultfd.h" : YES 00:02:56.520 Has header "linux/vduse.h" : YES 00:02:56.520 Message: lib/vhost: Defining dependency "vhost" 00:02:56.520 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:56.520 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:56.520 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:56.520 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:56.520 Message: 
Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:56.520 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:56.520 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:56.520 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:56.520 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:56.520 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:56.520 Program doxygen found: YES (/usr/bin/doxygen) 00:02:56.520 Configuring doxy-api-html.conf using configuration 00:02:56.520 Configuring doxy-api-man.conf using configuration 00:02:56.520 Program mandb found: YES (/usr/bin/mandb) 00:02:56.520 Program sphinx-build found: NO 00:02:56.520 Configuring rte_build_config.h using configuration 00:02:56.520 Message: 00:02:56.520 ================= 00:02:56.520 Applications Enabled 00:02:56.520 ================= 00:02:56.520 00:02:56.520 apps: 00:02:56.520 00:02:56.520 00:02:56.520 Message: 00:02:56.520 ================= 00:02:56.520 Libraries Enabled 00:02:56.520 ================= 00:02:56.520 00:02:56.520 libs: 00:02:56.520 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:56.520 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:56.520 cryptodev, dmadev, power, reorder, security, vhost, 00:02:56.520 00:02:56.520 Message: 00:02:56.520 =============== 00:02:56.520 Drivers Enabled 00:02:56.520 =============== 00:02:56.520 00:02:56.520 common: 00:02:56.520 00:02:56.520 bus: 00:02:56.520 pci, vdev, 00:02:56.520 mempool: 00:02:56.520 ring, 00:02:56.520 dma: 00:02:56.520 00:02:56.520 net: 00:02:56.520 00:02:56.520 crypto: 00:02:56.520 00:02:56.520 compress: 00:02:56.520 00:02:56.520 vdpa: 00:02:56.520 00:02:56.520 00:02:56.520 Message: 00:02:56.520 ================= 00:02:56.520 Content Skipped 00:02:56.520 ================= 00:02:56.520 00:02:56.520 apps: 00:02:56.520 dumpcap: explicitly disabled via build config 00:02:56.520 graph: explicitly disabled via build config 00:02:56.520 pdump: explicitly disabled via build config 00:02:56.520 proc-info: explicitly disabled via build config 00:02:56.520 test-acl: explicitly disabled via build config 00:02:56.520 test-bbdev: explicitly disabled via build config 00:02:56.520 test-cmdline: explicitly disabled via build config 00:02:56.520 test-compress-perf: explicitly disabled via build config 00:02:56.520 test-crypto-perf: explicitly disabled via build config 00:02:56.520 test-dma-perf: explicitly disabled via build config 00:02:56.520 test-eventdev: explicitly disabled via build config 00:02:56.520 test-fib: explicitly disabled via build config 00:02:56.520 test-flow-perf: explicitly disabled via build config 00:02:56.520 test-gpudev: explicitly disabled via build config 00:02:56.520 test-mldev: explicitly disabled via build config 00:02:56.520 test-pipeline: explicitly disabled via build config 00:02:56.520 test-pmd: explicitly disabled via build config 00:02:56.520 test-regex: explicitly disabled via build config 00:02:56.520 test-sad: explicitly disabled via build config 00:02:56.520 test-security-perf: explicitly disabled via build config 00:02:56.520 00:02:56.520 libs: 00:02:56.520 metrics: explicitly disabled via build config 00:02:56.520 acl: explicitly disabled via build config 00:02:56.520 bbdev: explicitly disabled via build config 00:02:56.520 bitratestats: explicitly disabled via build config 00:02:56.520 bpf: explicitly disabled via build config 00:02:56.520 cfgfile: explicitly 
disabled via build config 00:02:56.520 distributor: explicitly disabled via build config 00:02:56.520 efd: explicitly disabled via build config 00:02:56.520 eventdev: explicitly disabled via build config 00:02:56.520 dispatcher: explicitly disabled via build config 00:02:56.520 gpudev: explicitly disabled via build config 00:02:56.520 gro: explicitly disabled via build config 00:02:56.520 gso: explicitly disabled via build config 00:02:56.520 ip_frag: explicitly disabled via build config 00:02:56.520 jobstats: explicitly disabled via build config 00:02:56.520 latencystats: explicitly disabled via build config 00:02:56.520 lpm: explicitly disabled via build config 00:02:56.520 member: explicitly disabled via build config 00:02:56.520 pcapng: explicitly disabled via build config 00:02:56.520 rawdev: explicitly disabled via build config 00:02:56.520 regexdev: explicitly disabled via build config 00:02:56.520 mldev: explicitly disabled via build config 00:02:56.520 rib: explicitly disabled via build config 00:02:56.520 sched: explicitly disabled via build config 00:02:56.520 stack: explicitly disabled via build config 00:02:56.520 ipsec: explicitly disabled via build config 00:02:56.520 pdcp: explicitly disabled via build config 00:02:56.520 fib: explicitly disabled via build config 00:02:56.520 port: explicitly disabled via build config 00:02:56.520 pdump: explicitly disabled via build config 00:02:56.520 table: explicitly disabled via build config 00:02:56.520 pipeline: explicitly disabled via build config 00:02:56.520 graph: explicitly disabled via build config 00:02:56.520 node: explicitly disabled via build config 00:02:56.520 00:02:56.520 drivers: 00:02:56.520 common/cpt: not in enabled drivers build config 00:02:56.520 common/dpaax: not in enabled drivers build config 00:02:56.520 common/iavf: not in enabled drivers build config 00:02:56.520 common/idpf: not in enabled drivers build config 00:02:56.520 common/mvep: not in enabled drivers build config 00:02:56.520 common/octeontx: not in enabled drivers build config 00:02:56.520 bus/auxiliary: not in enabled drivers build config 00:02:56.520 bus/cdx: not in enabled drivers build config 00:02:56.520 bus/dpaa: not in enabled drivers build config 00:02:56.520 bus/fslmc: not in enabled drivers build config 00:02:56.520 bus/ifpga: not in enabled drivers build config 00:02:56.520 bus/platform: not in enabled drivers build config 00:02:56.520 bus/vmbus: not in enabled drivers build config 00:02:56.520 common/cnxk: not in enabled drivers build config 00:02:56.520 common/mlx5: not in enabled drivers build config 00:02:56.520 common/nfp: not in enabled drivers build config 00:02:56.520 common/qat: not in enabled drivers build config 00:02:56.520 common/sfc_efx: not in enabled drivers build config 00:02:56.520 mempool/bucket: not in enabled drivers build config 00:02:56.520 mempool/cnxk: not in enabled drivers build config 00:02:56.520 mempool/dpaa: not in enabled drivers build config 00:02:56.520 mempool/dpaa2: not in enabled drivers build config 00:02:56.520 mempool/octeontx: not in enabled drivers build config 00:02:56.520 mempool/stack: not in enabled drivers build config 00:02:56.520 dma/cnxk: not in enabled drivers build config 00:02:56.520 dma/dpaa: not in enabled drivers build config 00:02:56.520 dma/dpaa2: not in enabled drivers build config 00:02:56.521 dma/hisilicon: not in enabled drivers build config 00:02:56.521 dma/idxd: not in enabled drivers build config 00:02:56.521 dma/ioat: not in enabled drivers build config 00:02:56.521 
dma/skeleton: not in enabled drivers build config 00:02:56.521 net/af_packet: not in enabled drivers build config 00:02:56.521 net/af_xdp: not in enabled drivers build config 00:02:56.521 net/ark: not in enabled drivers build config 00:02:56.521 net/atlantic: not in enabled drivers build config 00:02:56.521 net/avp: not in enabled drivers build config 00:02:56.521 net/axgbe: not in enabled drivers build config 00:02:56.521 net/bnx2x: not in enabled drivers build config 00:02:56.521 net/bnxt: not in enabled drivers build config 00:02:56.521 net/bonding: not in enabled drivers build config 00:02:56.521 net/cnxk: not in enabled drivers build config 00:02:56.521 net/cpfl: not in enabled drivers build config 00:02:56.521 net/cxgbe: not in enabled drivers build config 00:02:56.521 net/dpaa: not in enabled drivers build config 00:02:56.521 net/dpaa2: not in enabled drivers build config 00:02:56.521 net/e1000: not in enabled drivers build config 00:02:56.521 net/ena: not in enabled drivers build config 00:02:56.521 net/enetc: not in enabled drivers build config 00:02:56.521 net/enetfec: not in enabled drivers build config 00:02:56.521 net/enic: not in enabled drivers build config 00:02:56.521 net/failsafe: not in enabled drivers build config 00:02:56.521 net/fm10k: not in enabled drivers build config 00:02:56.521 net/gve: not in enabled drivers build config 00:02:56.521 net/hinic: not in enabled drivers build config 00:02:56.521 net/hns3: not in enabled drivers build config 00:02:56.521 net/i40e: not in enabled drivers build config 00:02:56.521 net/iavf: not in enabled drivers build config 00:02:56.521 net/ice: not in enabled drivers build config 00:02:56.521 net/idpf: not in enabled drivers build config 00:02:56.521 net/igc: not in enabled drivers build config 00:02:56.521 net/ionic: not in enabled drivers build config 00:02:56.521 net/ipn3ke: not in enabled drivers build config 00:02:56.521 net/ixgbe: not in enabled drivers build config 00:02:56.521 net/mana: not in enabled drivers build config 00:02:56.521 net/memif: not in enabled drivers build config 00:02:56.521 net/mlx4: not in enabled drivers build config 00:02:56.521 net/mlx5: not in enabled drivers build config 00:02:56.521 net/mvneta: not in enabled drivers build config 00:02:56.521 net/mvpp2: not in enabled drivers build config 00:02:56.521 net/netvsc: not in enabled drivers build config 00:02:56.521 net/nfb: not in enabled drivers build config 00:02:56.521 net/nfp: not in enabled drivers build config 00:02:56.521 net/ngbe: not in enabled drivers build config 00:02:56.521 net/null: not in enabled drivers build config 00:02:56.521 net/octeontx: not in enabled drivers build config 00:02:56.521 net/octeon_ep: not in enabled drivers build config 00:02:56.521 net/pcap: not in enabled drivers build config 00:02:56.521 net/pfe: not in enabled drivers build config 00:02:56.521 net/qede: not in enabled drivers build config 00:02:56.521 net/ring: not in enabled drivers build config 00:02:56.521 net/sfc: not in enabled drivers build config 00:02:56.521 net/softnic: not in enabled drivers build config 00:02:56.521 net/tap: not in enabled drivers build config 00:02:56.521 net/thunderx: not in enabled drivers build config 00:02:56.521 net/txgbe: not in enabled drivers build config 00:02:56.521 net/vdev_netvsc: not in enabled drivers build config 00:02:56.521 net/vhost: not in enabled drivers build config 00:02:56.521 net/virtio: not in enabled drivers build config 00:02:56.521 net/vmxnet3: not in enabled drivers build config 00:02:56.521 raw/*: 
missing internal dependency, "rawdev" 00:02:56.521 crypto/armv8: not in enabled drivers build config 00:02:56.521 crypto/bcmfs: not in enabled drivers build config 00:02:56.521 crypto/caam_jr: not in enabled drivers build config 00:02:56.521 crypto/ccp: not in enabled drivers build config 00:02:56.521 crypto/cnxk: not in enabled drivers build config 00:02:56.521 crypto/dpaa_sec: not in enabled drivers build config 00:02:56.521 crypto/dpaa2_sec: not in enabled drivers build config 00:02:56.521 crypto/ipsec_mb: not in enabled drivers build config 00:02:56.521 crypto/mlx5: not in enabled drivers build config 00:02:56.521 crypto/mvsam: not in enabled drivers build config 00:02:56.521 crypto/nitrox: not in enabled drivers build config 00:02:56.521 crypto/null: not in enabled drivers build config 00:02:56.521 crypto/octeontx: not in enabled drivers build config 00:02:56.521 crypto/openssl: not in enabled drivers build config 00:02:56.521 crypto/scheduler: not in enabled drivers build config 00:02:56.521 crypto/uadk: not in enabled drivers build config 00:02:56.521 crypto/virtio: not in enabled drivers build config 00:02:56.521 compress/isal: not in enabled drivers build config 00:02:56.521 compress/mlx5: not in enabled drivers build config 00:02:56.521 compress/octeontx: not in enabled drivers build config 00:02:56.521 compress/zlib: not in enabled drivers build config 00:02:56.521 regex/*: missing internal dependency, "regexdev" 00:02:56.521 ml/*: missing internal dependency, "mldev" 00:02:56.521 vdpa/ifc: not in enabled drivers build config 00:02:56.521 vdpa/mlx5: not in enabled drivers build config 00:02:56.521 vdpa/nfp: not in enabled drivers build config 00:02:56.521 vdpa/sfc: not in enabled drivers build config 00:02:56.521 event/*: missing internal dependency, "eventdev" 00:02:56.521 baseband/*: missing internal dependency, "bbdev" 00:02:56.521 gpu/*: missing internal dependency, "gpudev" 00:02:56.521 00:02:56.521 00:02:56.521 Build targets in project: 85 00:02:56.521 00:02:56.521 DPDK 23.11.0 00:02:56.521 00:02:56.521 User defined options 00:02:56.521 buildtype : debug 00:02:56.521 default_library : shared 00:02:56.521 libdir : lib 00:02:56.521 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:56.521 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:56.521 c_link_args : 00:02:56.521 cpu_instruction_set: native 00:02:56.521 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:56.521 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:56.521 enable_docs : false 00:02:56.521 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:56.521 enable_kmods : false 00:02:56.521 tests : false 00:02:56.521 00:02:56.521 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:56.521 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:56.521 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:56.521 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:56.521 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:56.521 [4/265] 
Linking static target lib/librte_kvargs.a 00:02:56.521 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:56.521 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:56.521 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:56.521 [8/265] Linking static target lib/librte_log.a 00:02:56.521 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:56.521 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:56.521 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.521 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:56.521 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:56.521 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:56.521 [15/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.521 [16/265] Linking target lib/librte_log.so.24.0 00:02:56.521 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:56.521 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:56.521 [19/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:56.521 [20/265] Linking static target lib/librte_telemetry.a 00:02:56.521 [21/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:56.521 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:56.521 [23/265] Linking target lib/librte_kvargs.so.24.0 00:02:56.780 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:56.780 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:57.038 [26/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:57.038 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:57.296 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:57.554 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:57.554 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:57.554 [31/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.554 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:57.554 [33/265] Linking target lib/librte_telemetry.so.24.0 00:02:57.554 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:57.812 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:58.069 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:58.070 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:58.327 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:58.584 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:58.584 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:58.584 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:58.585 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:58.585 [43/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:58.585 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:59.219 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:59.219 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:59.219 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:59.497 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:59.497 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:59.754 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:59.754 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:00.320 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:00.320 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:00.320 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:00.578 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:00.578 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:00.836 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:00.836 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:00.836 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:00.836 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:01.094 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:01.094 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:01.351 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:01.608 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:01.865 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:01.865 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:01.865 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:01.865 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:01.865 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:02.122 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:02.380 [71/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:02.380 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:02.380 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:02.638 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:02.638 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:02.638 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:02.638 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:02.896 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:03.154 [79/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:03.154 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:03.154 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:03.411 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:03.411 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 
00:03:04.341 [84/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:04.341 [85/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:04.341 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:04.341 [87/265] Linking static target lib/librte_ring.a 00:03:04.341 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:04.341 [89/265] Linking static target lib/librte_eal.a 00:03:04.597 [90/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:04.597 [91/265] Linking static target lib/librte_rcu.a 00:03:04.597 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:04.853 [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.111 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:05.111 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:05.111 [96/265] Linking static target lib/librte_mempool.a 00:03:05.111 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:05.677 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:05.677 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.677 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:05.677 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:06.242 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:06.242 [103/265] Linking static target lib/librte_mbuf.a 00:03:06.242 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:06.500 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:06.500 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:06.758 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:07.015 [108/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.015 [109/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:07.015 [110/265] Linking static target lib/librte_net.a 00:03:07.274 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:07.274 [112/265] Linking static target lib/librte_meter.a 00:03:07.274 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:07.839 [114/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.839 [115/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.097 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.097 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:08.097 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:08.661 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:08.920 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:09.487 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:09.487 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:09.487 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:09.745 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:09.745 [125/265] Compiling C 
object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:09.745 [126/265] Linking static target lib/librte_pci.a 00:03:09.745 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:09.745 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:10.002 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:10.003 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:10.003 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:10.261 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:10.261 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:10.261 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:10.261 [135/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.519 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:10.519 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:10.519 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:10.519 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:10.519 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:10.519 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:10.519 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:10.519 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:11.084 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:11.084 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:11.084 [146/265] Linking static target lib/librte_ethdev.a 00:03:11.084 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:11.084 [148/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:11.084 [149/265] Linking static target lib/librte_cmdline.a 00:03:11.649 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:11.649 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:11.649 [152/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:11.649 [153/265] Linking static target lib/librte_timer.a 00:03:11.649 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:12.215 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:12.215 [156/265] Linking static target lib/librte_hash.a 00:03:12.215 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:12.473 [158/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.473 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:12.730 [160/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:12.730 [161/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:12.730 [162/265] Linking static target lib/librte_compressdev.a 00:03:12.990 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:13.254 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:13.254 [165/265] Generating lib/cmdline.sym_chk with a custom command 
(wrapped by meson to capture output) 00:03:13.512 [166/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:13.512 [167/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:13.512 [168/265] Linking static target lib/librte_dmadev.a 00:03:13.769 [169/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:13.769 [170/265] Linking static target lib/librte_cryptodev.a 00:03:13.769 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:13.769 [172/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:13.769 [173/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.026 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:14.283 [175/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.539 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.539 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:14.796 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:14.796 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:14.796 [180/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:14.796 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:15.361 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:15.361 [183/265] Linking static target lib/librte_power.a 00:03:15.966 [184/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:15.966 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:15.966 [186/265] Linking static target lib/librte_security.a 00:03:15.966 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:16.223 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:16.223 [189/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:16.223 [190/265] Linking static target lib/librte_reorder.a 00:03:17.158 [191/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.158 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:17.158 [193/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.158 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:17.158 [195/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.158 [196/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.158 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:17.723 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:17.981 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:17.981 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:18.239 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:18.239 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:18.239 [203/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:18.239 [204/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:18.239 [205/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:18.239 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:18.239 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:18.804 [208/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:18.804 [209/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.804 [210/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.804 [211/265] Linking static target drivers/librte_bus_pci.a 00:03:18.804 [212/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:18.804 [213/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:19.061 [214/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:19.061 [215/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:19.319 [216/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:19.319 [217/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.319 [218/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.319 [219/265] Linking static target drivers/librte_bus_vdev.a 00:03:19.319 [220/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.577 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:19.577 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.577 [223/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.577 [224/265] Linking static target drivers/librte_mempool_ring.a 00:03:19.577 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.577 [226/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.577 [227/265] Linking target lib/librte_eal.so.24.0 00:03:19.835 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:19.835 [229/265] Linking target lib/librte_ring.so.24.0 00:03:19.835 [230/265] Linking target lib/librte_meter.so.24.0 00:03:19.835 [231/265] Linking target lib/librte_pci.so.24.0 00:03:19.835 [232/265] Linking target lib/librte_timer.so.24.0 00:03:19.835 [233/265] Linking target lib/librte_dmadev.so.24.0 00:03:19.835 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:20.093 [235/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:20.093 [236/265] Linking target lib/librte_rcu.so.24.0 00:03:20.093 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:20.093 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:20.093 [239/265] Linking target lib/librte_mempool.so.24.0 00:03:20.093 [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:20.093 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:20.093 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:20.350 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:20.350 [244/265] Generating 
symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:20.350 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:20.350 [246/265] Linking target lib/librte_mbuf.so.24.0 00:03:20.608 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:20.608 [248/265] Linking target lib/librte_reorder.so.24.0 00:03:20.608 [249/265] Linking target lib/librte_compressdev.so.24.0 00:03:20.608 [250/265] Linking target lib/librte_net.so.24.0 00:03:20.608 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:03:20.866 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:20.866 [253/265] Linking target lib/librte_hash.so.24.0 00:03:20.866 [254/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.866 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:20.866 [256/265] Linking target lib/librte_cmdline.so.24.0 00:03:20.866 [257/265] Linking target lib/librte_security.so.24.0 00:03:20.866 [258/265] Linking target lib/librte_ethdev.so.24.0 00:03:21.156 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:21.156 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:21.156 [261/265] Linking target lib/librte_power.so.24.0 00:03:21.730 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:21.730 [263/265] Linking static target lib/librte_vhost.a 00:03:23.103 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.103 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:23.103 INFO: autodetecting backend as ninja 00:03:23.103 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:24.475 CC lib/ut_mock/mock.o 00:03:24.475 CC lib/log/log.o 00:03:24.475 CC lib/ut/ut.o 00:03:24.475 CC lib/log/log_flags.o 00:03:24.475 CC lib/log/log_deprecated.o 00:03:24.733 LIB libspdk_ut_mock.a 00:03:24.733 LIB libspdk_ut.a 00:03:24.733 SO libspdk_ut_mock.so.6.0 00:03:24.733 SO libspdk_ut.so.2.0 00:03:24.733 LIB libspdk_log.a 00:03:24.733 SYMLINK libspdk_ut_mock.so 00:03:24.733 SYMLINK libspdk_ut.so 00:03:24.733 SO libspdk_log.so.7.0 00:03:24.733 SYMLINK libspdk_log.so 00:03:24.990 CXX lib/trace_parser/trace.o 00:03:24.990 CC lib/dma/dma.o 00:03:24.990 CC lib/util/base64.o 00:03:24.990 CC lib/util/bit_array.o 00:03:24.990 CC lib/util/cpuset.o 00:03:24.990 CC lib/util/crc16.o 00:03:24.990 CC lib/util/crc32.o 00:03:24.990 CC lib/ioat/ioat.o 00:03:24.990 CC lib/util/crc32c.o 00:03:25.248 CC lib/vfio_user/host/vfio_user_pci.o 00:03:25.248 CC lib/vfio_user/host/vfio_user.o 00:03:25.248 CC lib/util/crc32_ieee.o 00:03:25.248 LIB libspdk_dma.a 00:03:25.248 CC lib/util/crc64.o 00:03:25.248 SO libspdk_dma.so.4.0 00:03:25.248 LIB libspdk_ioat.a 00:03:25.506 CC lib/util/dif.o 00:03:25.506 CC lib/util/fd.o 00:03:25.506 CC lib/util/file.o 00:03:25.506 SO libspdk_ioat.so.7.0 00:03:25.506 SYMLINK libspdk_dma.so 00:03:25.506 CC lib/util/hexlify.o 00:03:25.506 CC lib/util/iov.o 00:03:25.506 SYMLINK libspdk_ioat.so 00:03:25.506 CC lib/util/math.o 00:03:25.506 CC lib/util/pipe.o 00:03:25.506 CC lib/util/strerror_tls.o 00:03:25.506 CC lib/util/string.o 00:03:25.506 LIB libspdk_vfio_user.a 00:03:25.506 CC lib/util/uuid.o 00:03:25.506 CC lib/util/fd_group.o 00:03:25.506 SO libspdk_vfio_user.so.5.0 00:03:25.764 CC 
lib/util/xor.o 00:03:25.764 CC lib/util/zipf.o 00:03:25.764 SYMLINK libspdk_vfio_user.so 00:03:26.021 LIB libspdk_util.a 00:03:26.280 SO libspdk_util.so.9.0 00:03:26.280 LIB libspdk_trace_parser.a 00:03:26.280 SO libspdk_trace_parser.so.5.0 00:03:26.538 SYMLINK libspdk_util.so 00:03:26.538 SYMLINK libspdk_trace_parser.so 00:03:26.538 CC lib/rdma/common.o 00:03:26.538 CC lib/rdma/rdma_verbs.o 00:03:26.538 CC lib/idxd/idxd.o 00:03:26.538 CC lib/conf/conf.o 00:03:26.538 CC lib/idxd/idxd_user.o 00:03:26.538 CC lib/env_dpdk/memory.o 00:03:26.538 CC lib/env_dpdk/pci.o 00:03:26.538 CC lib/env_dpdk/env.o 00:03:26.538 CC lib/json/json_parse.o 00:03:26.538 CC lib/vmd/vmd.o 00:03:26.814 CC lib/vmd/led.o 00:03:26.814 LIB libspdk_conf.a 00:03:26.814 CC lib/json/json_util.o 00:03:26.814 CC lib/json/json_write.o 00:03:26.814 SO libspdk_conf.so.6.0 00:03:26.814 SYMLINK libspdk_conf.so 00:03:27.101 CC lib/env_dpdk/init.o 00:03:27.101 LIB libspdk_rdma.a 00:03:27.101 CC lib/env_dpdk/threads.o 00:03:27.101 SO libspdk_rdma.so.6.0 00:03:27.101 CC lib/env_dpdk/pci_ioat.o 00:03:27.101 SYMLINK libspdk_rdma.so 00:03:27.101 CC lib/env_dpdk/pci_virtio.o 00:03:27.101 CC lib/env_dpdk/pci_vmd.o 00:03:27.101 LIB libspdk_idxd.a 00:03:27.101 CC lib/env_dpdk/pci_idxd.o 00:03:27.101 LIB libspdk_json.a 00:03:27.101 SO libspdk_idxd.so.12.0 00:03:27.101 SO libspdk_json.so.6.0 00:03:27.359 CC lib/env_dpdk/pci_event.o 00:03:27.359 SYMLINK libspdk_idxd.so 00:03:27.359 CC lib/env_dpdk/sigbus_handler.o 00:03:27.359 CC lib/env_dpdk/pci_dpdk.o 00:03:27.359 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:27.359 SYMLINK libspdk_json.so 00:03:27.359 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:27.359 LIB libspdk_vmd.a 00:03:27.359 SO libspdk_vmd.so.6.0 00:03:27.359 SYMLINK libspdk_vmd.so 00:03:27.616 CC lib/jsonrpc/jsonrpc_server.o 00:03:27.616 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:27.616 CC lib/jsonrpc/jsonrpc_client.o 00:03:27.616 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:27.874 LIB libspdk_jsonrpc.a 00:03:27.874 SO libspdk_jsonrpc.so.6.0 00:03:27.874 SYMLINK libspdk_jsonrpc.so 00:03:28.133 LIB libspdk_env_dpdk.a 00:03:28.133 CC lib/rpc/rpc.o 00:03:28.133 SO libspdk_env_dpdk.so.14.0 00:03:28.391 SYMLINK libspdk_env_dpdk.so 00:03:28.391 LIB libspdk_rpc.a 00:03:28.391 SO libspdk_rpc.so.6.0 00:03:28.391 SYMLINK libspdk_rpc.so 00:03:28.649 CC lib/keyring/keyring.o 00:03:28.649 CC lib/keyring/keyring_rpc.o 00:03:28.649 CC lib/notify/notify.o 00:03:28.649 CC lib/trace/trace.o 00:03:28.649 CC lib/trace/trace_flags.o 00:03:28.649 CC lib/notify/notify_rpc.o 00:03:28.649 CC lib/trace/trace_rpc.o 00:03:28.906 LIB libspdk_notify.a 00:03:28.906 LIB libspdk_keyring.a 00:03:28.906 SO libspdk_keyring.so.1.0 00:03:28.906 SO libspdk_notify.so.6.0 00:03:29.164 SYMLINK libspdk_notify.so 00:03:29.164 SYMLINK libspdk_keyring.so 00:03:29.164 LIB libspdk_trace.a 00:03:29.164 SO libspdk_trace.so.10.0 00:03:29.164 SYMLINK libspdk_trace.so 00:03:29.422 CC lib/sock/sock_rpc.o 00:03:29.422 CC lib/sock/sock.o 00:03:29.422 CC lib/thread/thread.o 00:03:29.422 CC lib/thread/iobuf.o 00:03:29.985 LIB libspdk_sock.a 00:03:29.985 SO libspdk_sock.so.9.0 00:03:29.985 SYMLINK libspdk_sock.so 00:03:30.243 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:30.243 CC lib/nvme/nvme_ctrlr.o 00:03:30.243 CC lib/nvme/nvme_ns.o 00:03:30.243 CC lib/nvme/nvme_ns_cmd.o 00:03:30.243 CC lib/nvme/nvme_fabric.o 00:03:30.243 CC lib/nvme/nvme_pcie.o 00:03:30.243 CC lib/nvme/nvme_pcie_common.o 00:03:30.243 CC lib/nvme/nvme_qpair.o 00:03:30.243 CC lib/nvme/nvme.o 00:03:31.186 CC lib/nvme/nvme_quirks.o 00:03:31.186 LIB 
libspdk_thread.a 00:03:31.186 SO libspdk_thread.so.10.0 00:03:31.186 CC lib/nvme/nvme_transport.o 00:03:31.186 SYMLINK libspdk_thread.so 00:03:31.186 CC lib/nvme/nvme_discovery.o 00:03:31.186 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:31.186 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:31.186 CC lib/nvme/nvme_tcp.o 00:03:31.444 CC lib/nvme/nvme_opal.o 00:03:31.444 CC lib/nvme/nvme_io_msg.o 00:03:31.444 CC lib/nvme/nvme_poll_group.o 00:03:31.701 CC lib/nvme/nvme_zns.o 00:03:31.959 CC lib/nvme/nvme_stubs.o 00:03:31.959 CC lib/nvme/nvme_auth.o 00:03:31.959 CC lib/nvme/nvme_cuse.o 00:03:31.959 CC lib/accel/accel.o 00:03:31.959 CC lib/blob/blobstore.o 00:03:32.217 CC lib/blob/request.o 00:03:32.217 CC lib/blob/zeroes.o 00:03:32.475 CC lib/nvme/nvme_rdma.o 00:03:32.475 CC lib/accel/accel_rpc.o 00:03:32.732 CC lib/init/json_config.o 00:03:32.732 CC lib/virtio/virtio.o 00:03:32.732 CC lib/accel/accel_sw.o 00:03:32.990 CC lib/init/subsystem.o 00:03:32.990 CC lib/init/subsystem_rpc.o 00:03:33.248 CC lib/init/rpc.o 00:03:33.248 CC lib/blob/blob_bs_dev.o 00:03:33.248 CC lib/virtio/virtio_vhost_user.o 00:03:33.248 CC lib/virtio/virtio_vfio_user.o 00:03:33.248 LIB libspdk_accel.a 00:03:33.248 CC lib/virtio/virtio_pci.o 00:03:33.248 SO libspdk_accel.so.15.0 00:03:33.505 SYMLINK libspdk_accel.so 00:03:33.505 LIB libspdk_init.a 00:03:33.505 SO libspdk_init.so.5.0 00:03:33.505 SYMLINK libspdk_init.so 00:03:33.505 LIB libspdk_virtio.a 00:03:33.763 CC lib/bdev/bdev_rpc.o 00:03:33.763 CC lib/bdev/bdev.o 00:03:33.763 CC lib/bdev/bdev_zone.o 00:03:33.763 CC lib/bdev/part.o 00:03:33.763 CC lib/bdev/scsi_nvme.o 00:03:33.763 SO libspdk_virtio.so.7.0 00:03:33.763 SYMLINK libspdk_virtio.so 00:03:33.763 CC lib/event/app.o 00:03:33.763 CC lib/event/reactor.o 00:03:33.763 CC lib/event/log_rpc.o 00:03:33.763 CC lib/event/app_rpc.o 00:03:34.021 CC lib/event/scheduler_static.o 00:03:34.279 LIB libspdk_nvme.a 00:03:34.279 LIB libspdk_event.a 00:03:34.536 SO libspdk_event.so.13.0 00:03:34.536 SYMLINK libspdk_event.so 00:03:34.536 SO libspdk_nvme.so.13.0 00:03:35.101 SYMLINK libspdk_nvme.so 00:03:36.033 LIB libspdk_blob.a 00:03:36.033 SO libspdk_blob.so.11.0 00:03:36.033 SYMLINK libspdk_blob.so 00:03:36.291 CC lib/blobfs/blobfs.o 00:03:36.291 CC lib/blobfs/tree.o 00:03:36.291 CC lib/lvol/lvol.o 00:03:37.224 LIB libspdk_bdev.a 00:03:37.224 SO libspdk_bdev.so.15.0 00:03:37.224 LIB libspdk_blobfs.a 00:03:37.224 SO libspdk_blobfs.so.10.0 00:03:37.224 LIB libspdk_lvol.a 00:03:37.224 SO libspdk_lvol.so.10.0 00:03:37.224 SYMLINK libspdk_bdev.so 00:03:37.224 SYMLINK libspdk_blobfs.so 00:03:37.482 SYMLINK libspdk_lvol.so 00:03:37.482 CC lib/scsi/lun.o 00:03:37.482 CC lib/scsi/dev.o 00:03:37.482 CC lib/scsi/port.o 00:03:37.482 CC lib/scsi/scsi.o 00:03:37.482 CC lib/ublk/ublk.o 00:03:37.482 CC lib/scsi/scsi_bdev.o 00:03:37.482 CC lib/ublk/ublk_rpc.o 00:03:37.482 CC lib/ftl/ftl_core.o 00:03:37.482 CC lib/nbd/nbd.o 00:03:37.482 CC lib/nvmf/ctrlr.o 00:03:37.739 CC lib/nbd/nbd_rpc.o 00:03:37.739 CC lib/nvmf/ctrlr_discovery.o 00:03:37.739 CC lib/ftl/ftl_init.o 00:03:37.997 CC lib/scsi/scsi_pr.o 00:03:37.997 CC lib/scsi/scsi_rpc.o 00:03:37.997 CC lib/scsi/task.o 00:03:37.997 CC lib/nvmf/ctrlr_bdev.o 00:03:37.997 CC lib/ftl/ftl_layout.o 00:03:37.997 CC lib/ftl/ftl_debug.o 00:03:38.255 CC lib/ftl/ftl_io.o 00:03:38.255 LIB libspdk_nbd.a 00:03:38.255 CC lib/nvmf/subsystem.o 00:03:38.255 SO libspdk_nbd.so.7.0 00:03:38.255 CC lib/nvmf/nvmf.o 00:03:38.512 CC lib/ftl/ftl_sb.o 00:03:38.512 SYMLINK libspdk_nbd.so 00:03:38.512 CC lib/ftl/ftl_l2p.o 00:03:38.512 
CC lib/ftl/ftl_l2p_flat.o 00:03:38.512 LIB libspdk_scsi.a 00:03:38.512 LIB libspdk_ublk.a 00:03:38.512 SO libspdk_scsi.so.9.0 00:03:38.512 SO libspdk_ublk.so.3.0 00:03:38.512 CC lib/nvmf/nvmf_rpc.o 00:03:38.769 CC lib/nvmf/transport.o 00:03:38.770 SYMLINK libspdk_ublk.so 00:03:38.770 CC lib/nvmf/tcp.o 00:03:38.770 CC lib/nvmf/stubs.o 00:03:38.770 SYMLINK libspdk_scsi.so 00:03:38.770 CC lib/ftl/ftl_nv_cache.o 00:03:38.770 CC lib/nvmf/rdma.o 00:03:39.335 CC lib/nvmf/auth.o 00:03:39.335 CC lib/ftl/ftl_band.o 00:03:39.592 CC lib/ftl/ftl_band_ops.o 00:03:39.592 CC lib/ftl/ftl_writer.o 00:03:39.850 CC lib/ftl/ftl_rq.o 00:03:40.108 CC lib/ftl/ftl_reloc.o 00:03:40.108 CC lib/ftl/ftl_l2p_cache.o 00:03:40.108 CC lib/vhost/vhost.o 00:03:40.108 CC lib/iscsi/conn.o 00:03:40.108 CC lib/ftl/ftl_p2l.o 00:03:40.108 CC lib/ftl/mngt/ftl_mngt.o 00:03:40.365 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:40.365 CC lib/vhost/vhost_rpc.o 00:03:40.634 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:40.634 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:40.892 CC lib/iscsi/init_grp.o 00:03:40.892 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:40.892 CC lib/vhost/vhost_scsi.o 00:03:40.892 CC lib/vhost/vhost_blk.o 00:03:40.892 CC lib/vhost/rte_vhost_user.o 00:03:41.150 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:41.150 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:41.150 CC lib/iscsi/iscsi.o 00:03:41.150 CC lib/iscsi/md5.o 00:03:41.150 CC lib/iscsi/param.o 00:03:41.408 CC lib/iscsi/portal_grp.o 00:03:41.408 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:41.408 CC lib/iscsi/tgt_node.o 00:03:41.408 CC lib/iscsi/iscsi_subsystem.o 00:03:41.666 CC lib/iscsi/iscsi_rpc.o 00:03:41.666 CC lib/iscsi/task.o 00:03:41.666 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:41.923 LIB libspdk_nvmf.a 00:03:41.923 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:41.923 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:41.923 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:41.923 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:41.923 SO libspdk_nvmf.so.18.0 00:03:42.181 CC lib/ftl/utils/ftl_conf.o 00:03:42.181 CC lib/ftl/utils/ftl_md.o 00:03:42.181 CC lib/ftl/utils/ftl_mempool.o 00:03:42.181 CC lib/ftl/utils/ftl_bitmap.o 00:03:42.181 CC lib/ftl/utils/ftl_property.o 00:03:42.181 SYMLINK libspdk_nvmf.so 00:03:42.181 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:42.181 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:42.438 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:42.438 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:42.438 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:42.438 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:42.438 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:42.438 LIB libspdk_vhost.a 00:03:42.438 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:42.438 SO libspdk_vhost.so.8.0 00:03:42.438 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:42.438 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:42.696 CC lib/ftl/base/ftl_base_dev.o 00:03:42.696 CC lib/ftl/base/ftl_base_bdev.o 00:03:42.696 CC lib/ftl/ftl_trace.o 00:03:42.696 SYMLINK libspdk_vhost.so 00:03:42.954 LIB libspdk_iscsi.a 00:03:42.954 LIB libspdk_ftl.a 00:03:42.954 SO libspdk_iscsi.so.8.0 00:03:43.212 SO libspdk_ftl.so.9.0 00:03:43.212 SYMLINK libspdk_iscsi.so 00:03:43.471 SYMLINK libspdk_ftl.so 00:03:43.729 CC module/env_dpdk/env_dpdk_rpc.o 00:03:43.987 CC module/scheduler/gscheduler/gscheduler.o 00:03:43.987 CC module/keyring/file/keyring.o 00:03:43.987 CC module/sock/posix/posix.o 00:03:43.987 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:43.987 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:43.987 CC module/accel/dsa/accel_dsa.o 00:03:43.987 CC module/accel/ioat/accel_ioat.o 00:03:43.987 CC 
module/accel/error/accel_error.o 00:03:43.987 LIB libspdk_env_dpdk_rpc.a 00:03:43.987 CC module/blob/bdev/blob_bdev.o 00:03:43.987 SO libspdk_env_dpdk_rpc.so.6.0 00:03:43.987 SYMLINK libspdk_env_dpdk_rpc.so 00:03:43.987 CC module/accel/error/accel_error_rpc.o 00:03:43.987 LIB libspdk_scheduler_gscheduler.a 00:03:44.245 SO libspdk_scheduler_gscheduler.so.4.0 00:03:44.245 CC module/accel/ioat/accel_ioat_rpc.o 00:03:44.245 LIB libspdk_scheduler_dynamic.a 00:03:44.245 CC module/keyring/file/keyring_rpc.o 00:03:44.245 SYMLINK libspdk_scheduler_gscheduler.so 00:03:44.245 CC module/accel/dsa/accel_dsa_rpc.o 00:03:44.245 SO libspdk_scheduler_dynamic.so.4.0 00:03:44.245 LIB libspdk_scheduler_dpdk_governor.a 00:03:44.245 SYMLINK libspdk_scheduler_dynamic.so 00:03:44.245 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:44.245 LIB libspdk_accel_error.a 00:03:44.245 LIB libspdk_blob_bdev.a 00:03:44.245 SO libspdk_blob_bdev.so.11.0 00:03:44.245 SO libspdk_accel_error.so.2.0 00:03:44.245 LIB libspdk_accel_ioat.a 00:03:44.245 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:44.245 LIB libspdk_keyring_file.a 00:03:44.245 SO libspdk_accel_ioat.so.6.0 00:03:44.245 SYMLINK libspdk_blob_bdev.so 00:03:44.245 SYMLINK libspdk_accel_error.so 00:03:44.245 LIB libspdk_accel_dsa.a 00:03:44.503 SO libspdk_keyring_file.so.1.0 00:03:44.503 SYMLINK libspdk_accel_ioat.so 00:03:44.503 SO libspdk_accel_dsa.so.5.0 00:03:44.503 CC module/accel/iaa/accel_iaa.o 00:03:44.503 CC module/accel/iaa/accel_iaa_rpc.o 00:03:44.503 SYMLINK libspdk_keyring_file.so 00:03:44.503 SYMLINK libspdk_accel_dsa.so 00:03:44.503 CC module/bdev/delay/vbdev_delay.o 00:03:44.503 CC module/bdev/error/vbdev_error.o 00:03:44.503 CC module/bdev/malloc/bdev_malloc.o 00:03:44.503 CC module/bdev/gpt/gpt.o 00:03:44.760 CC module/bdev/lvol/vbdev_lvol.o 00:03:44.760 CC module/blobfs/bdev/blobfs_bdev.o 00:03:44.760 LIB libspdk_accel_iaa.a 00:03:44.760 CC module/bdev/null/bdev_null.o 00:03:44.760 SO libspdk_accel_iaa.so.3.0 00:03:44.760 LIB libspdk_sock_posix.a 00:03:44.760 CC module/bdev/nvme/bdev_nvme.o 00:03:44.760 SO libspdk_sock_posix.so.6.0 00:03:44.760 SYMLINK libspdk_accel_iaa.so 00:03:44.760 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:44.760 CC module/bdev/gpt/vbdev_gpt.o 00:03:44.760 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:44.760 SYMLINK libspdk_sock_posix.so 00:03:44.760 CC module/bdev/error/vbdev_error_rpc.o 00:03:45.027 CC module/bdev/null/bdev_null_rpc.o 00:03:45.027 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:45.027 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:45.027 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:45.027 LIB libspdk_blobfs_bdev.a 00:03:45.027 LIB libspdk_bdev_error.a 00:03:45.027 SO libspdk_blobfs_bdev.so.6.0 00:03:45.028 SO libspdk_bdev_error.so.6.0 00:03:45.028 LIB libspdk_bdev_null.a 00:03:45.028 SYMLINK libspdk_blobfs_bdev.so 00:03:45.028 LIB libspdk_bdev_gpt.a 00:03:45.028 SO libspdk_bdev_null.so.6.0 00:03:45.028 CC module/bdev/nvme/nvme_rpc.o 00:03:45.028 SYMLINK libspdk_bdev_error.so 00:03:45.028 CC module/bdev/nvme/bdev_mdns_client.o 00:03:45.028 SO libspdk_bdev_gpt.so.6.0 00:03:45.285 LIB libspdk_bdev_delay.a 00:03:45.285 LIB libspdk_bdev_malloc.a 00:03:45.285 SO libspdk_bdev_malloc.so.6.0 00:03:45.285 SYMLINK libspdk_bdev_null.so 00:03:45.285 SO libspdk_bdev_delay.so.6.0 00:03:45.285 SYMLINK libspdk_bdev_gpt.so 00:03:45.285 CC module/bdev/nvme/vbdev_opal.o 00:03:45.285 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:45.285 LIB libspdk_bdev_lvol.a 00:03:45.285 SYMLINK libspdk_bdev_malloc.so 00:03:45.285 SYMLINK 
libspdk_bdev_delay.so 00:03:45.285 SO libspdk_bdev_lvol.so.6.0 00:03:45.285 CC module/bdev/passthru/vbdev_passthru.o 00:03:45.285 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:45.285 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:45.543 SYMLINK libspdk_bdev_lvol.so 00:03:45.543 CC module/bdev/split/vbdev_split.o 00:03:45.543 CC module/bdev/raid/bdev_raid.o 00:03:45.543 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:45.543 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:45.543 CC module/bdev/aio/bdev_aio.o 00:03:45.543 CC module/bdev/ftl/bdev_ftl.o 00:03:45.801 LIB libspdk_bdev_passthru.a 00:03:45.801 CC module/bdev/iscsi/bdev_iscsi.o 00:03:45.801 SO libspdk_bdev_passthru.so.6.0 00:03:45.801 CC module/bdev/split/vbdev_split_rpc.o 00:03:45.801 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:45.801 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:45.801 SYMLINK libspdk_bdev_passthru.so 00:03:45.801 CC module/bdev/aio/bdev_aio_rpc.o 00:03:45.801 LIB libspdk_bdev_split.a 00:03:46.060 LIB libspdk_bdev_zone_block.a 00:03:46.060 SO libspdk_bdev_split.so.6.0 00:03:46.060 CC module/bdev/raid/bdev_raid_rpc.o 00:03:46.060 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:46.060 CC module/bdev/raid/bdev_raid_sb.o 00:03:46.060 SO libspdk_bdev_zone_block.so.6.0 00:03:46.060 LIB libspdk_bdev_aio.a 00:03:46.060 SO libspdk_bdev_aio.so.6.0 00:03:46.060 SYMLINK libspdk_bdev_split.so 00:03:46.060 SYMLINK libspdk_bdev_zone_block.so 00:03:46.060 CC module/bdev/raid/raid0.o 00:03:46.060 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:46.060 LIB libspdk_bdev_iscsi.a 00:03:46.060 SYMLINK libspdk_bdev_aio.so 00:03:46.060 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:46.060 SO libspdk_bdev_iscsi.so.6.0 00:03:46.060 LIB libspdk_bdev_ftl.a 00:03:46.060 SYMLINK libspdk_bdev_iscsi.so 00:03:46.060 CC module/bdev/raid/raid1.o 00:03:46.060 CC module/bdev/raid/concat.o 00:03:46.319 SO libspdk_bdev_ftl.so.6.0 00:03:46.319 SYMLINK libspdk_bdev_ftl.so 00:03:46.319 LIB libspdk_bdev_virtio.a 00:03:46.319 SO libspdk_bdev_virtio.so.6.0 00:03:46.577 LIB libspdk_bdev_raid.a 00:03:46.577 SYMLINK libspdk_bdev_virtio.so 00:03:46.577 SO libspdk_bdev_raid.so.6.0 00:03:46.577 SYMLINK libspdk_bdev_raid.so 00:03:47.143 LIB libspdk_bdev_nvme.a 00:03:47.143 SO libspdk_bdev_nvme.so.7.0 00:03:47.401 SYMLINK libspdk_bdev_nvme.so 00:03:47.965 CC module/event/subsystems/scheduler/scheduler.o 00:03:47.965 CC module/event/subsystems/sock/sock.o 00:03:47.965 CC module/event/subsystems/vmd/vmd.o 00:03:47.965 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:47.965 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:47.965 CC module/event/subsystems/iobuf/iobuf.o 00:03:47.965 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:47.965 CC module/event/subsystems/keyring/keyring.o 00:03:47.965 LIB libspdk_event_sock.a 00:03:47.965 LIB libspdk_event_keyring.a 00:03:47.965 SO libspdk_event_sock.so.5.0 00:03:47.965 SO libspdk_event_keyring.so.1.0 00:03:47.966 LIB libspdk_event_iobuf.a 00:03:47.966 LIB libspdk_event_vmd.a 00:03:47.966 LIB libspdk_event_scheduler.a 00:03:47.966 SYMLINK libspdk_event_sock.so 00:03:47.966 SYMLINK libspdk_event_keyring.so 00:03:47.966 SO libspdk_event_iobuf.so.3.0 00:03:48.222 SO libspdk_event_scheduler.so.4.0 00:03:48.222 SO libspdk_event_vmd.so.6.0 00:03:48.222 LIB libspdk_event_vhost_blk.a 00:03:48.222 SO libspdk_event_vhost_blk.so.3.0 00:03:48.222 SYMLINK libspdk_event_scheduler.so 00:03:48.222 SYMLINK libspdk_event_iobuf.so 00:03:48.222 SYMLINK libspdk_event_vmd.so 00:03:48.222 SYMLINK libspdk_event_vhost_blk.so 00:03:48.479 CC 
module/event/subsystems/accel/accel.o 00:03:48.479 LIB libspdk_event_accel.a 00:03:48.737 SO libspdk_event_accel.so.6.0 00:03:48.737 SYMLINK libspdk_event_accel.so 00:03:48.994 CC module/event/subsystems/bdev/bdev.o 00:03:49.252 LIB libspdk_event_bdev.a 00:03:49.252 SO libspdk_event_bdev.so.6.0 00:03:49.252 SYMLINK libspdk_event_bdev.so 00:03:49.509 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:49.509 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:49.509 CC module/event/subsystems/nbd/nbd.o 00:03:49.509 CC module/event/subsystems/scsi/scsi.o 00:03:49.509 CC module/event/subsystems/ublk/ublk.o 00:03:49.767 LIB libspdk_event_ublk.a 00:03:49.767 LIB libspdk_event_nbd.a 00:03:49.767 SO libspdk_event_ublk.so.3.0 00:03:49.767 LIB libspdk_event_scsi.a 00:03:49.767 SO libspdk_event_nbd.so.6.0 00:03:49.767 SO libspdk_event_scsi.so.6.0 00:03:49.767 SYMLINK libspdk_event_ublk.so 00:03:49.767 SYMLINK libspdk_event_nbd.so 00:03:49.767 LIB libspdk_event_nvmf.a 00:03:49.767 SYMLINK libspdk_event_scsi.so 00:03:49.767 SO libspdk_event_nvmf.so.6.0 00:03:50.025 SYMLINK libspdk_event_nvmf.so 00:03:50.025 CC module/event/subsystems/iscsi/iscsi.o 00:03:50.025 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:50.283 LIB libspdk_event_vhost_scsi.a 00:03:50.283 LIB libspdk_event_iscsi.a 00:03:50.283 SO libspdk_event_vhost_scsi.so.3.0 00:03:50.283 SO libspdk_event_iscsi.so.6.0 00:03:50.283 SYMLINK libspdk_event_iscsi.so 00:03:50.283 SYMLINK libspdk_event_vhost_scsi.so 00:03:50.541 SO libspdk.so.6.0 00:03:50.541 SYMLINK libspdk.so 00:03:50.799 CC app/trace_record/trace_record.o 00:03:50.799 CC app/spdk_lspci/spdk_lspci.o 00:03:50.799 CC app/spdk_nvme_perf/perf.o 00:03:50.799 CC app/spdk_nvme_identify/identify.o 00:03:50.799 CXX app/trace/trace.o 00:03:50.799 CC app/iscsi_tgt/iscsi_tgt.o 00:03:50.799 CC app/nvmf_tgt/nvmf_main.o 00:03:50.799 CC app/spdk_tgt/spdk_tgt.o 00:03:50.799 CC examples/accel/perf/accel_perf.o 00:03:51.056 CC test/accel/dif/dif.o 00:03:51.056 LINK spdk_trace_record 00:03:51.056 LINK iscsi_tgt 00:03:51.056 LINK spdk_lspci 00:03:51.056 LINK nvmf_tgt 00:03:51.314 LINK spdk_tgt 00:03:51.314 CC app/spdk_nvme_discover/discovery_aer.o 00:03:51.314 LINK dif 00:03:51.571 LINK spdk_trace 00:03:51.571 CC examples/bdev/hello_world/hello_bdev.o 00:03:51.571 CC app/spdk_top/spdk_top.o 00:03:51.571 LINK spdk_nvme_discover 00:03:51.571 CC examples/bdev/bdevperf/bdevperf.o 00:03:51.571 CC app/vhost/vhost.o 00:03:51.829 LINK spdk_nvme_identify 00:03:51.829 LINK accel_perf 00:03:51.829 LINK hello_bdev 00:03:51.829 CC test/app/bdev_svc/bdev_svc.o 00:03:51.829 LINK vhost 00:03:52.087 LINK spdk_nvme_perf 00:03:52.087 CC examples/ioat/perf/perf.o 00:03:52.087 CC examples/nvme/hello_world/hello_world.o 00:03:52.087 CC examples/blob/hello_world/hello_blob.o 00:03:52.087 LINK bdev_svc 00:03:52.087 CC test/bdev/bdevio/bdevio.o 00:03:52.087 CC examples/blob/cli/blobcli.o 00:03:52.345 LINK hello_world 00:03:52.345 LINK hello_blob 00:03:52.345 CC examples/sock/hello_world/hello_sock.o 00:03:52.345 LINK ioat_perf 00:03:52.345 CC app/spdk_dd/spdk_dd.o 00:03:52.602 LINK spdk_top 00:03:52.602 LINK bdevio 00:03:52.602 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:52.602 LINK hello_sock 00:03:52.602 CC examples/ioat/verify/verify.o 00:03:52.602 CC examples/nvme/reconnect/reconnect.o 00:03:52.602 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:52.602 LINK blobcli 00:03:52.860 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:52.860 LINK bdevperf 00:03:52.860 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:52.860 LINK 
verify 00:03:52.860 LINK spdk_dd 00:03:53.117 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:53.117 LINK reconnect 00:03:53.117 LINK nvme_fuzz 00:03:53.117 CC examples/vmd/lsvmd/lsvmd.o 00:03:53.117 CC app/fio/nvme/fio_plugin.o 00:03:53.117 CC examples/nvme/arbitration/arbitration.o 00:03:53.375 CC test/blobfs/mkfs/mkfs.o 00:03:53.375 LINK lsvmd 00:03:53.375 CC examples/nvme/hotplug/hotplug.o 00:03:53.375 LINK vhost_fuzz 00:03:53.375 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:53.375 CC examples/nvme/abort/abort.o 00:03:53.375 LINK mkfs 00:03:53.632 CC examples/vmd/led/led.o 00:03:53.632 LINK hotplug 00:03:53.632 LINK nvme_manage 00:03:53.632 LINK arbitration 00:03:53.632 LINK cmb_copy 00:03:53.632 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:53.632 LINK spdk_nvme 00:03:53.632 LINK led 00:03:53.890 LINK pmr_persistence 00:03:53.890 LINK abort 00:03:53.890 CC test/app/histogram_perf/histogram_perf.o 00:03:53.890 CC test/app/stub/stub.o 00:03:53.890 CC test/app/jsoncat/jsoncat.o 00:03:53.890 CC app/fio/bdev/fio_plugin.o 00:03:53.890 CC examples/nvmf/nvmf/nvmf.o 00:03:53.890 CC examples/util/zipf/zipf.o 00:03:53.890 LINK histogram_perf 00:03:53.890 LINK jsoncat 00:03:53.890 TEST_HEADER include/spdk/accel.h 00:03:53.890 TEST_HEADER include/spdk/accel_module.h 00:03:53.890 TEST_HEADER include/spdk/assert.h 00:03:53.890 TEST_HEADER include/spdk/barrier.h 00:03:53.890 TEST_HEADER include/spdk/base64.h 00:03:53.890 TEST_HEADER include/spdk/bdev.h 00:03:54.149 TEST_HEADER include/spdk/bdev_module.h 00:03:54.149 TEST_HEADER include/spdk/bdev_zone.h 00:03:54.149 TEST_HEADER include/spdk/bit_array.h 00:03:54.149 TEST_HEADER include/spdk/bit_pool.h 00:03:54.149 TEST_HEADER include/spdk/blob_bdev.h 00:03:54.149 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:54.149 TEST_HEADER include/spdk/blobfs.h 00:03:54.149 TEST_HEADER include/spdk/blob.h 00:03:54.149 TEST_HEADER include/spdk/conf.h 00:03:54.149 TEST_HEADER include/spdk/config.h 00:03:54.149 TEST_HEADER include/spdk/cpuset.h 00:03:54.149 LINK stub 00:03:54.149 TEST_HEADER include/spdk/crc16.h 00:03:54.149 TEST_HEADER include/spdk/crc32.h 00:03:54.149 TEST_HEADER include/spdk/crc64.h 00:03:54.149 TEST_HEADER include/spdk/dif.h 00:03:54.149 TEST_HEADER include/spdk/dma.h 00:03:54.149 TEST_HEADER include/spdk/endian.h 00:03:54.149 TEST_HEADER include/spdk/env_dpdk.h 00:03:54.149 TEST_HEADER include/spdk/env.h 00:03:54.149 TEST_HEADER include/spdk/event.h 00:03:54.149 TEST_HEADER include/spdk/fd_group.h 00:03:54.149 TEST_HEADER include/spdk/fd.h 00:03:54.149 TEST_HEADER include/spdk/file.h 00:03:54.149 TEST_HEADER include/spdk/ftl.h 00:03:54.149 TEST_HEADER include/spdk/gpt_spec.h 00:03:54.149 TEST_HEADER include/spdk/hexlify.h 00:03:54.149 TEST_HEADER include/spdk/histogram_data.h 00:03:54.149 TEST_HEADER include/spdk/idxd.h 00:03:54.149 TEST_HEADER include/spdk/idxd_spec.h 00:03:54.149 TEST_HEADER include/spdk/init.h 00:03:54.149 TEST_HEADER include/spdk/ioat.h 00:03:54.149 TEST_HEADER include/spdk/ioat_spec.h 00:03:54.149 TEST_HEADER include/spdk/iscsi_spec.h 00:03:54.149 TEST_HEADER include/spdk/json.h 00:03:54.149 TEST_HEADER include/spdk/jsonrpc.h 00:03:54.149 TEST_HEADER include/spdk/keyring.h 00:03:54.149 TEST_HEADER include/spdk/keyring_module.h 00:03:54.149 TEST_HEADER include/spdk/likely.h 00:03:54.149 TEST_HEADER include/spdk/log.h 00:03:54.149 TEST_HEADER include/spdk/lvol.h 00:03:54.149 TEST_HEADER include/spdk/memory.h 00:03:54.149 TEST_HEADER include/spdk/mmio.h 00:03:54.149 TEST_HEADER include/spdk/nbd.h 00:03:54.149 
TEST_HEADER include/spdk/notify.h 00:03:54.149 TEST_HEADER include/spdk/nvme.h 00:03:54.149 LINK zipf 00:03:54.149 TEST_HEADER include/spdk/nvme_intel.h 00:03:54.149 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:54.149 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:54.149 TEST_HEADER include/spdk/nvme_spec.h 00:03:54.149 TEST_HEADER include/spdk/nvme_zns.h 00:03:54.149 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:54.149 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:54.149 TEST_HEADER include/spdk/nvmf.h 00:03:54.149 TEST_HEADER include/spdk/nvmf_spec.h 00:03:54.149 TEST_HEADER include/spdk/nvmf_transport.h 00:03:54.149 TEST_HEADER include/spdk/opal.h 00:03:54.149 TEST_HEADER include/spdk/opal_spec.h 00:03:54.149 TEST_HEADER include/spdk/pci_ids.h 00:03:54.149 TEST_HEADER include/spdk/pipe.h 00:03:54.149 TEST_HEADER include/spdk/queue.h 00:03:54.149 TEST_HEADER include/spdk/reduce.h 00:03:54.149 TEST_HEADER include/spdk/rpc.h 00:03:54.149 TEST_HEADER include/spdk/scheduler.h 00:03:54.149 TEST_HEADER include/spdk/scsi.h 00:03:54.149 TEST_HEADER include/spdk/scsi_spec.h 00:03:54.149 TEST_HEADER include/spdk/sock.h 00:03:54.149 TEST_HEADER include/spdk/stdinc.h 00:03:54.149 CC test/dma/test_dma/test_dma.o 00:03:54.149 TEST_HEADER include/spdk/string.h 00:03:54.149 TEST_HEADER include/spdk/thread.h 00:03:54.149 TEST_HEADER include/spdk/trace.h 00:03:54.149 TEST_HEADER include/spdk/trace_parser.h 00:03:54.149 TEST_HEADER include/spdk/tree.h 00:03:54.149 TEST_HEADER include/spdk/ublk.h 00:03:54.149 TEST_HEADER include/spdk/util.h 00:03:54.149 TEST_HEADER include/spdk/uuid.h 00:03:54.149 TEST_HEADER include/spdk/version.h 00:03:54.149 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:54.149 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:54.149 CC test/env/mem_callbacks/mem_callbacks.o 00:03:54.149 TEST_HEADER include/spdk/vhost.h 00:03:54.149 TEST_HEADER include/spdk/vmd.h 00:03:54.149 TEST_HEADER include/spdk/xor.h 00:03:54.149 TEST_HEADER include/spdk/zipf.h 00:03:54.149 CXX test/cpp_headers/accel.o 00:03:54.149 CXX test/cpp_headers/accel_module.o 00:03:54.408 LINK nvmf 00:03:54.408 CC test/env/vtophys/vtophys.o 00:03:54.408 CXX test/cpp_headers/assert.o 00:03:54.408 CXX test/cpp_headers/barrier.o 00:03:54.408 LINK spdk_bdev 00:03:54.408 LINK vtophys 00:03:54.408 CXX test/cpp_headers/base64.o 00:03:54.408 LINK iscsi_fuzz 00:03:54.408 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:54.408 CC examples/thread/thread/thread_ex.o 00:03:54.665 CXX test/cpp_headers/bdev.o 00:03:54.665 LINK test_dma 00:03:54.665 CC test/env/memory/memory_ut.o 00:03:54.665 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:54.665 LINK env_dpdk_post_init 00:03:54.665 CC examples/idxd/perf/perf.o 00:03:54.922 LINK mem_callbacks 00:03:54.922 CXX test/cpp_headers/bdev_module.o 00:03:54.922 LINK interrupt_tgt 00:03:54.922 LINK thread 00:03:54.922 CC test/env/pci/pci_ut.o 00:03:55.180 CC test/event/event_perf/event_perf.o 00:03:55.180 LINK idxd_perf 00:03:55.180 CXX test/cpp_headers/bdev_zone.o 00:03:55.180 CC test/nvme/aer/aer.o 00:03:55.180 CC test/lvol/esnap/esnap.o 00:03:55.180 CC test/nvme/reset/reset.o 00:03:55.180 LINK event_perf 00:03:55.439 CXX test/cpp_headers/bit_array.o 00:03:55.439 CC test/nvme/sgl/sgl.o 00:03:55.439 CC test/nvme/e2edp/nvme_dp.o 00:03:55.439 LINK pci_ut 00:03:55.439 LINK aer 00:03:55.439 LINK reset 00:03:55.697 CC test/event/reactor/reactor.o 00:03:55.697 CXX test/cpp_headers/bit_pool.o 00:03:55.697 LINK sgl 00:03:55.697 LINK memory_ut 00:03:55.697 LINK reactor 00:03:55.697 LINK nvme_dp 
00:03:55.697 CC test/nvme/overhead/overhead.o 00:03:55.956 CC test/nvme/err_injection/err_injection.o 00:03:55.956 CC test/nvme/startup/startup.o 00:03:55.956 CXX test/cpp_headers/blob_bdev.o 00:03:56.214 CC test/nvme/reserve/reserve.o 00:03:56.214 CC test/rpc_client/rpc_client_test.o 00:03:56.214 LINK err_injection 00:03:56.214 LINK startup 00:03:56.214 CC test/nvme/simple_copy/simple_copy.o 00:03:56.214 CC test/event/reactor_perf/reactor_perf.o 00:03:56.214 LINK overhead 00:03:56.473 CXX test/cpp_headers/blobfs_bdev.o 00:03:56.473 LINK rpc_client_test 00:03:56.473 CC test/event/app_repeat/app_repeat.o 00:03:56.473 LINK reserve 00:03:56.473 LINK simple_copy 00:03:56.473 LINK reactor_perf 00:03:56.473 CXX test/cpp_headers/blobfs.o 00:03:56.731 CC test/nvme/connect_stress/connect_stress.o 00:03:56.731 CC test/nvme/boot_partition/boot_partition.o 00:03:56.731 CC test/event/scheduler/scheduler.o 00:03:56.731 LINK app_repeat 00:03:56.990 CC test/nvme/compliance/nvme_compliance.o 00:03:56.990 CXX test/cpp_headers/blob.o 00:03:56.990 CC test/nvme/fused_ordering/fused_ordering.o 00:03:56.990 LINK connect_stress 00:03:56.990 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:56.990 LINK boot_partition 00:03:56.990 LINK scheduler 00:03:56.990 CXX test/cpp_headers/conf.o 00:03:56.990 CXX test/cpp_headers/config.o 00:03:56.990 CXX test/cpp_headers/cpuset.o 00:03:57.263 CXX test/cpp_headers/crc16.o 00:03:57.263 CXX test/cpp_headers/crc32.o 00:03:57.263 LINK nvme_compliance 00:03:57.263 LINK fused_ordering 00:03:57.263 CXX test/cpp_headers/crc64.o 00:03:57.263 LINK doorbell_aers 00:03:57.263 CC test/thread/poller_perf/poller_perf.o 00:03:57.263 CXX test/cpp_headers/dif.o 00:03:57.530 CC test/nvme/fdp/fdp.o 00:03:57.530 CXX test/cpp_headers/dma.o 00:03:57.530 CXX test/cpp_headers/endian.o 00:03:57.530 CXX test/cpp_headers/env_dpdk.o 00:03:57.530 CC test/nvme/cuse/cuse.o 00:03:57.530 CXX test/cpp_headers/env.o 00:03:57.530 LINK poller_perf 00:03:57.530 CXX test/cpp_headers/event.o 00:03:57.530 CXX test/cpp_headers/fd_group.o 00:03:57.788 CXX test/cpp_headers/fd.o 00:03:57.788 LINK fdp 00:03:57.788 CXX test/cpp_headers/file.o 00:03:57.788 CXX test/cpp_headers/ftl.o 00:03:58.045 CXX test/cpp_headers/gpt_spec.o 00:03:58.045 CXX test/cpp_headers/hexlify.o 00:03:58.045 CXX test/cpp_headers/histogram_data.o 00:03:58.045 CXX test/cpp_headers/idxd.o 00:03:58.045 CXX test/cpp_headers/idxd_spec.o 00:03:58.045 CXX test/cpp_headers/init.o 00:03:58.045 CXX test/cpp_headers/ioat.o 00:03:58.303 CXX test/cpp_headers/ioat_spec.o 00:03:58.303 CXX test/cpp_headers/iscsi_spec.o 00:03:58.303 CXX test/cpp_headers/json.o 00:03:58.303 CXX test/cpp_headers/jsonrpc.o 00:03:58.303 CXX test/cpp_headers/keyring.o 00:03:58.303 CXX test/cpp_headers/keyring_module.o 00:03:58.303 CXX test/cpp_headers/likely.o 00:03:58.303 CXX test/cpp_headers/log.o 00:03:58.560 CXX test/cpp_headers/lvol.o 00:03:58.560 CXX test/cpp_headers/memory.o 00:03:58.560 CXX test/cpp_headers/mmio.o 00:03:58.560 CXX test/cpp_headers/nbd.o 00:03:58.560 CXX test/cpp_headers/notify.o 00:03:58.560 CXX test/cpp_headers/nvme.o 00:03:58.560 CXX test/cpp_headers/nvme_intel.o 00:03:58.560 CXX test/cpp_headers/nvme_ocssd.o 00:03:58.560 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:58.819 CXX test/cpp_headers/nvme_spec.o 00:03:58.819 CXX test/cpp_headers/nvme_zns.o 00:03:58.819 CXX test/cpp_headers/nvmf_cmd.o 00:03:58.819 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:58.819 CXX test/cpp_headers/nvmf.o 00:03:58.819 LINK cuse 00:03:58.819 CXX test/cpp_headers/nvmf_spec.o 00:03:58.819 
CXX test/cpp_headers/nvmf_transport.o 00:03:58.819 CXX test/cpp_headers/opal.o 00:03:59.076 CXX test/cpp_headers/opal_spec.o 00:03:59.076 CXX test/cpp_headers/pci_ids.o 00:03:59.076 CXX test/cpp_headers/pipe.o 00:03:59.076 CXX test/cpp_headers/queue.o 00:03:59.076 CXX test/cpp_headers/reduce.o 00:03:59.076 CXX test/cpp_headers/rpc.o 00:03:59.076 CXX test/cpp_headers/scheduler.o 00:03:59.076 CXX test/cpp_headers/scsi.o 00:03:59.076 CXX test/cpp_headers/scsi_spec.o 00:03:59.076 CXX test/cpp_headers/sock.o 00:03:59.335 CXX test/cpp_headers/stdinc.o 00:03:59.335 CXX test/cpp_headers/string.o 00:03:59.335 CXX test/cpp_headers/thread.o 00:03:59.335 CXX test/cpp_headers/trace.o 00:03:59.335 CXX test/cpp_headers/trace_parser.o 00:03:59.593 CXX test/cpp_headers/tree.o 00:03:59.593 CXX test/cpp_headers/ublk.o 00:03:59.593 CXX test/cpp_headers/util.o 00:03:59.593 CXX test/cpp_headers/uuid.o 00:03:59.593 CXX test/cpp_headers/version.o 00:03:59.593 CXX test/cpp_headers/vfio_user_pci.o 00:03:59.593 CXX test/cpp_headers/vfio_user_spec.o 00:03:59.593 CXX test/cpp_headers/vhost.o 00:03:59.593 CXX test/cpp_headers/vmd.o 00:03:59.852 CXX test/cpp_headers/xor.o 00:03:59.852 CXX test/cpp_headers/zipf.o 00:04:00.787 LINK esnap 00:04:02.687 00:04:02.687 real 1m28.419s 00:04:02.687 user 9m48.393s 00:04:02.687 sys 1m55.069s 00:04:02.687 22:51:14 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:02.687 22:51:14 make -- common/autotest_common.sh@10 -- $ set +x 00:04:02.687 ************************************ 00:04:02.687 END TEST make 00:04:02.687 ************************************ 00:04:02.687 22:51:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:02.687 22:51:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:02.687 22:51:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:02.687 22:51:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.687 22:51:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:02.687 22:51:14 -- pm/common@44 -- $ pid=5182 00:04:02.687 22:51:14 -- pm/common@50 -- $ kill -TERM 5182 00:04:02.687 22:51:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.687 22:51:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:02.687 22:51:14 -- pm/common@44 -- $ pid=5184 00:04:02.687 22:51:14 -- pm/common@50 -- $ kill -TERM 5184 00:04:02.687 22:51:14 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:02.687 22:51:14 -- nvmf/common.sh@7 -- # uname -s 00:04:02.687 22:51:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.687 22:51:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.687 22:51:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.687 22:51:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.687 22:51:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.687 22:51:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.687 22:51:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:02.687 22:51:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.687 22:51:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.687 22:51:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.687 22:51:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:04:02.687 22:51:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 
00:04:02.687 22:51:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.687 22:51:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.687 22:51:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:02.687 22:51:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:02.687 22:51:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:02.687 22:51:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.687 22:51:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.687 22:51:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.687 22:51:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.687 22:51:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.687 22:51:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.687 22:51:15 -- paths/export.sh@5 -- # export PATH 00:04:02.687 22:51:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.687 22:51:15 -- nvmf/common.sh@47 -- # : 0 00:04:02.687 22:51:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:02.687 22:51:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:02.687 22:51:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:02.687 22:51:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.687 22:51:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.687 22:51:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:02.687 22:51:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:02.687 22:51:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:02.687 22:51:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:02.687 22:51:15 -- spdk/autotest.sh@32 -- # uname -s 00:04:02.687 22:51:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:02.687 22:51:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:02.687 22:51:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:02.687 22:51:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:02.687 22:51:15 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:02.687 22:51:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:02.687 22:51:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:02.687 22:51:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:02.687 22:51:15 -- spdk/autotest.sh@48 -- 
# udevadm_pid=54131 00:04:02.687 22:51:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:02.687 22:51:15 -- pm/common@17 -- # local monitor 00:04:02.687 22:51:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.687 22:51:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:02.687 22:51:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:02.687 22:51:15 -- pm/common@25 -- # sleep 1 00:04:02.687 22:51:15 -- pm/common@21 -- # date +%s 00:04:02.687 22:51:15 -- pm/common@21 -- # date +%s 00:04:02.687 22:51:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715727075 00:04:02.687 22:51:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715727075 00:04:02.945 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715727075_collect-vmstat.pm.log 00:04:02.945 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715727075_collect-cpu-load.pm.log 00:04:03.880 22:51:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:03.880 22:51:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:03.880 22:51:16 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:03.880 22:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:03.880 22:51:16 -- spdk/autotest.sh@59 -- # create_test_list 00:04:03.880 22:51:16 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:03.880 22:51:16 -- common/autotest_common.sh@10 -- # set +x 00:04:03.880 22:51:16 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:03.880 22:51:16 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:03.880 22:51:16 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:03.880 22:51:16 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:03.880 22:51:16 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:03.880 22:51:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:03.880 22:51:16 -- common/autotest_common.sh@1451 -- # uname 00:04:03.880 22:51:16 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:03.880 22:51:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:03.880 22:51:16 -- common/autotest_common.sh@1471 -- # uname 00:04:03.880 22:51:16 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:03.880 22:51:16 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:03.880 22:51:16 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:03.880 22:51:16 -- spdk/autotest.sh@72 -- # hash lcov 00:04:03.880 22:51:16 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:03.880 22:51:16 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:03.880 --rc lcov_branch_coverage=1 00:04:03.880 --rc lcov_function_coverage=1 00:04:03.880 --rc genhtml_branch_coverage=1 00:04:03.880 --rc genhtml_function_coverage=1 00:04:03.880 --rc genhtml_legend=1 00:04:03.880 --rc geninfo_all_blocks=1 00:04:03.880 ' 00:04:03.880 22:51:16 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:03.880 --rc lcov_branch_coverage=1 00:04:03.880 --rc lcov_function_coverage=1 00:04:03.880 --rc genhtml_branch_coverage=1 00:04:03.880 --rc genhtml_function_coverage=1 00:04:03.880 --rc genhtml_legend=1 00:04:03.880 --rc geninfo_all_blocks=1 
00:04:03.880 ' 00:04:03.880 22:51:16 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:03.880 --rc lcov_branch_coverage=1 00:04:03.880 --rc lcov_function_coverage=1 00:04:03.880 --rc genhtml_branch_coverage=1 00:04:03.880 --rc genhtml_function_coverage=1 00:04:03.880 --rc genhtml_legend=1 00:04:03.880 --rc geninfo_all_blocks=1 00:04:03.880 --no-external' 00:04:03.880 22:51:16 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:03.880 --rc lcov_branch_coverage=1 00:04:03.880 --rc lcov_function_coverage=1 00:04:03.880 --rc genhtml_branch_coverage=1 00:04:03.880 --rc genhtml_function_coverage=1 00:04:03.880 --rc genhtml_legend=1 00:04:03.880 --rc geninfo_all_blocks=1 00:04:03.880 --no-external' 00:04:03.880 22:51:16 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:03.880 lcov: LCOV version 1.14 00:04:03.880 22:51:16 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:13.851 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:13.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:13.851 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:13.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:13.851 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:13.851 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:19.152 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:19.152 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:34.027 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:34.027 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:34.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:34.027 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:34.028 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:34.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:34.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:36.560 22:51:48 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:36.560 22:51:48 -- common/autotest_common.sh@720 -- # xtrace_disable 
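The block above is the initial ("Baseline") lcov capture taken before any tests run; the long run of geninfo "no functions found" warnings is expected for objects such as the cpp_headers compile-only stubs, whose translation units contain no functions to record. A minimal sketch of that coverage workflow follows, using only the lcov options visible in the trace; the final merge step is an assumption about how a baseline is normally combined with post-test data and is not shown in this excerpt.
# Baseline capture before the tests run (mirrors the autotest.sh invocation above)
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
     --no-external -q -c -i -t Baseline \
     -d /home/vagrant/spdk_repo/spdk -o cov_base.info
# ... test suites execute and write .gcda counters ...
# Assumed follow-up: capture the post-test counters and merge with the baseline
lcov --no-external -q -c -t Tests -d /home/vagrant/spdk_repo/spdk -o cov_test.info
lcov -a cov_base.info -a cov_test.info -o cov_total.info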
00:04:36.560 22:51:48 -- common/autotest_common.sh@10 -- # set +x 00:04:36.560 22:51:48 -- spdk/autotest.sh@91 -- # rm -f 00:04:36.560 22:51:48 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.125 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:37.383 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:37.383 22:51:49 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:37.383 22:51:49 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:37.383 22:51:49 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:37.383 22:51:49 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:37.383 22:51:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:37.383 22:51:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:37.383 22:51:49 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:37.383 22:51:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.383 22:51:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:37.383 22:51:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:37.384 22:51:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:37.384 22:51:49 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:37.384 22:51:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:37.384 22:51:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:37.384 22:51:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:37.384 22:51:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:04:37.384 22:51:49 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:04:37.384 22:51:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:37.384 22:51:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:37.384 22:51:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:37.384 22:51:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:04:37.384 22:51:49 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:04:37.384 22:51:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:37.384 22:51:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:37.384 22:51:49 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:37.384 22:51:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.384 22:51:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:37.384 22:51:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:37.384 22:51:49 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:37.384 22:51:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:37.384 No valid GPT data, bailing 00:04:37.384 22:51:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:37.384 22:51:49 -- scripts/common.sh@391 -- # pt= 00:04:37.384 22:51:49 -- scripts/common.sh@392 -- # return 1 00:04:37.384 22:51:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:37.384 1+0 records in 00:04:37.384 1+0 records out 00:04:37.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00401588 s, 261 MB/s 00:04:37.384 22:51:49 -- spdk/autotest.sh@110 -- # for dev in 
/dev/nvme*n!(*p*) 00:04:37.384 22:51:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:37.384 22:51:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:37.384 22:51:49 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:37.384 22:51:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:37.384 No valid GPT data, bailing 00:04:37.384 22:51:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:37.384 22:51:49 -- scripts/common.sh@391 -- # pt= 00:04:37.384 22:51:49 -- scripts/common.sh@392 -- # return 1 00:04:37.384 22:51:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:37.384 1+0 records in 00:04:37.384 1+0 records out 00:04:37.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00347944 s, 301 MB/s 00:04:37.384 22:51:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.384 22:51:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:37.384 22:51:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:37.384 22:51:49 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:37.384 22:51:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:37.384 No valid GPT data, bailing 00:04:37.384 22:51:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:37.645 22:51:49 -- scripts/common.sh@391 -- # pt= 00:04:37.645 22:51:49 -- scripts/common.sh@392 -- # return 1 00:04:37.645 22:51:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:37.645 1+0 records in 00:04:37.645 1+0 records out 00:04:37.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00364138 s, 288 MB/s 00:04:37.645 22:51:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.645 22:51:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:37.645 22:51:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:37.645 22:51:49 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:37.645 22:51:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:37.645 No valid GPT data, bailing 00:04:37.645 22:51:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:37.645 22:51:49 -- scripts/common.sh@391 -- # pt= 00:04:37.645 22:51:49 -- scripts/common.sh@392 -- # return 1 00:04:37.645 22:51:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:37.645 1+0 records in 00:04:37.645 1+0 records out 00:04:37.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047934 s, 219 MB/s 00:04:37.645 22:51:49 -- spdk/autotest.sh@118 -- # sync 00:04:37.645 22:51:49 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:37.645 22:51:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:37.645 22:51:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:39.553 22:51:51 -- spdk/autotest.sh@124 -- # uname -s 00:04:39.553 22:51:51 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:39.553 22:51:51 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:39.553 22:51:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.553 22:51:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.553 22:51:51 -- common/autotest_common.sh@10 -- # set +x 00:04:39.553 ************************************ 00:04:39.553 START TEST setup.sh 00:04:39.553 ************************************ 
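The dd runs above are gated by two checks per namespace: the device must not be zoned, and it must not already carry a partition table (spdk-gpt.py plus blkid both come back empty, hence "No valid GPT data, bailing"). A condensed sketch of that per-device loop, under the assumption that the helper paths are exactly the ones printed in the trace:
shopt -s extglob
for dev in /dev/nvme*n!(*p*); do
    name=$(basename "$dev")
    # Skip zoned namespaces (the is_block_zoned check in the trace)
    if [[ -e /sys/block/$name/queue/zoned && $(<"/sys/block/$name/queue/zoned") != none ]]; then
        continue
    fi
    # Inspect any existing GPT with the repo helper, then ask blkid for a partition table type
    /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev"
    [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
    # Device is not in use: zero the first 1 MiB, as in the dd output above
    dd if=/dev/zero of="$dev" bs=1M count=1
done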
00:04:39.553 22:51:51 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:39.554 * Looking for test storage... 00:04:39.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.554 22:51:51 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:39.554 22:51:51 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:39.554 22:51:51 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:39.554 22:51:51 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.554 22:51:51 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.554 22:51:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.554 ************************************ 00:04:39.554 START TEST acl 00:04:39.554 ************************************ 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:39.554 * Looking for test storage... 00:04:39.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.554 22:51:51 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:39.554 22:51:51 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:39.554 22:51:51 setup.sh.acl -- setup/acl.sh@12 -- # 
devs=() 00:04:39.554 22:51:51 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:39.554 22:51:51 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:39.554 22:51:51 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:39.554 22:51:51 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:39.554 22:51:51 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.554 22:51:51 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.488 22:51:52 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:40.488 22:51:52 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:40.488 22:51:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.488 22:51:52 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:40.488 22:51:52 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.488 22:51:52 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.054 Hugepages 00:04:41.054 node hugesize free / total 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.054 00:04:41.054 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:41.054 22:51:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.312 22:51:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:41.312 22:51:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:41.312 22:51:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:41.312 22:51:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:41.312 22:51:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:41.312 22:51:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.312 22:51:53 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:41.312 22:51:53 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:41.312 22:51:53 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.312 22:51:53 setup.sh.acl -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:04:41.312 22:51:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:41.312 ************************************ 00:04:41.312 START TEST denied 00:04:41.312 ************************************ 00:04:41.312 22:51:53 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:41.312 22:51:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:41.312 22:51:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:41.312 22:51:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.312 22:51:53 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.312 22:51:53 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:42.245 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:42.245 22:51:54 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:42.245 22:51:54 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:42.245 22:51:54 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:42.245 22:51:54 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:42.245 22:51:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:42.245 22:51:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:42.245 22:51:54 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:42.245 22:51:54 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:42.245 22:51:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.245 22:51:54 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.503 00:04:42.503 real 0m1.395s 00:04:42.503 user 0m0.568s 00:04:42.503 sys 0m0.768s 00:04:42.503 22:51:54 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.503 ************************************ 00:04:42.503 22:51:54 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:42.503 END TEST denied 00:04:42.503 ************************************ 00:04:42.762 22:51:54 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:42.762 22:51:54 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.762 22:51:54 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.762 22:51:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:42.762 ************************************ 00:04:42.762 START TEST allowed 00:04:42.762 ************************************ 00:04:42.762 22:51:54 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:42.762 22:51:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:42.762 22:51:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:42.762 22:51:54 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:42.762 22:51:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.762 22:51:54 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:43.698 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.698 22:51:55 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:43.698 22:51:55 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev 
driver 00:04:43.698 22:51:55 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:43.698 22:51:55 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:43.698 22:51:55 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:43.698 22:51:55 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:43.698 22:51:55 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:43.698 22:51:55 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:43.698 22:51:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.698 22:51:55 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.265 00:04:44.265 real 0m1.601s 00:04:44.265 user 0m0.730s 00:04:44.265 sys 0m0.852s 00:04:44.265 22:51:56 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.265 22:51:56 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:44.265 ************************************ 00:04:44.265 END TEST allowed 00:04:44.265 ************************************ 00:04:44.265 00:04:44.265 real 0m4.757s 00:04:44.265 user 0m2.133s 00:04:44.265 sys 0m2.553s 00:04:44.265 22:51:56 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.265 22:51:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:44.265 ************************************ 00:04:44.265 END TEST acl 00:04:44.265 ************************************ 00:04:44.265 22:51:56 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:44.265 22:51:56 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.265 22:51:56 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.265 22:51:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:44.265 ************************************ 00:04:44.265 START TEST hugepages 00:04:44.265 ************************************ 00:04:44.265 22:51:56 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:44.526 * Looking for test storage... 
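The acl suite that just finished drives scripts/setup.sh config twice, once with PCI_BLOCKED (the denied test expects the controller to be skipped and left on the kernel nvme driver) and once with PCI_ALLOWED (the allowed test expects the listed controller to be rebound, here nvme -> uio_pci_generic, while the other one stays put). A minimal sketch of that verification, with the 0000:00:10.0 and 0000:00:11.0 addresses taken from the log:
verify_driver() {
    local bdf=$1 want=$2
    # Resolve the driver currently bound to the PCI function, as acl.sh does
    local drv
    drv=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    [[ ${drv##*/} == "$want" ]]
}
# Denied: the blocked controller must still be on its original kernel driver
PCI_BLOCKED=' 0000:00:10.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
verify_driver 0000:00:10.0 nvme
# Allowed: 0000:00:10.0 is rebound (log shows "nvme -> uio_pci_generic"), and the
# test then confirms the other controller was left alone
PCI_ALLOWED=0000:00:10.0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
verify_driver 0000:00:11.0 nvme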
00:04:44.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5477012 kB' 'MemAvailable: 7404212 kB' 'Buffers: 2436 kB' 'Cached: 2136976 kB' 'SwapCached: 0 kB' 'Active: 873808 kB' 'Inactive: 1369576 kB' 'Active(anon): 114460 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 105604 kB' 'Mapped: 48548 kB' 'Shmem: 10488 kB' 'KReclaimable: 70432 kB' 'Slab: 144268 kB' 'SReclaimable: 70432 kB' 'SUnreclaim: 73836 kB' 'KernelStack: 6400 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.526 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.527 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
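(Editor's note on this stretch of trace: the long run of "continue" entries above is one helper walking /proc/meminfo a single "Key: value" pair at a time and skipping every key that is not the one requested; the scan resumes below and ends when Hugepagesize finally matches and 2048 is echoed back to hugepages.sh as default_hugepages. A minimal bash sketch of that pattern, assuming a simplified stand-in rather than the verbatim setup/common.sh helper:

# Assumption: simplified illustration of the field scan traced above,
# not the literal setup/common.sh implementation.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each non-matching key shows up as one "continue" in the xtrace
        echo "$val"                        # e.g. Hugepagesize -> 2048 (value is in kB for size fields)
        return 0
    done < /proc/meminfo
    return 1
}

Every skipped key (MemTotal, MemFree, ..., HugePages_Surp) produces one iteration of IFS=': ' / read / continue in the log, which is why this section of the trace is so repetitive.)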
00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:44.528 22:51:56 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:44.528 22:51:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.528 22:51:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.528 22:51:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:44.528 ************************************ 00:04:44.528 START TEST default_setup 00:04:44.528 ************************************ 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.528 22:51:56 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.095 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.095 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.358 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.358 22:51:57 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7559528 kB' 'MemAvailable: 9486564 kB' 'Buffers: 2436 kB' 'Cached: 2136964 kB' 'SwapCached: 0 kB' 'Active: 890548 kB' 'Inactive: 1369580 kB' 'Active(anon): 131200 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369580 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 70092 kB' 'Slab: 143992 kB' 'SReclaimable: 70092 kB' 'SUnreclaim: 73900 kB' 'KernelStack: 6256 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.358 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
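(Editor's note: the AnonHugePages lookup running through this part of the trace first decides which meminfo to read. With no node argument it stays on /proc/meminfo, while a per-node request would switch to the sysfs copy, whose lines carry a "Node <n> " prefix that is stripped before parsing; the mapfile and "${mem[@]#Node +([0-9]) }" steps above are exactly that. A hedged sketch of the selection, assuming a simplified helper rather than the exact setup/common.sh code:

# Assumption: illustrative only; mirrors the source-selection and
# prefix-stripping steps visible in the trace.
shopt -s extglob                        # required for the +([0-9]) pattern below
pick_meminfo() {
    local node=${1-} mem_f=/proc/meminfo mem
    # Prefer the per-node meminfo under sysfs when a NUMA node was requested.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop any "Node <n> " prefix
    printf '%s\n' "${mem[@]}"           # the large printf snapshots in this log are this step
}

In this run node is empty, so the sysfs test fails and the snapshot printed above comes straight from /proc/meminfo.)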
00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.359 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
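(Editor's note: what this stretch is building up to is the bookkeeping step of verify_nr_hugepages: unless transparent hugepages report "[never]", it samples AnonHugePages, then reads HugePages_Surp and HugePages_Rsvd through the same scan. The trace below shows anon and surp both coming back 0, with the Rsvd lookup starting just before this excerpt is cut off. A rough, condensed sketch of that read sequence, shown here with awk for brevity rather than the bash read loop the script itself uses; the variable names are taken from the trace, and the final comparison against the requested page counts happens past the end of this excerpt:

# Assumption: condensed illustration of the verify step's meminfo reads only.
anon=$(awk '$1 == "AnonHugePages:"  {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
echo "anon=${anon} surp=${surp} resv=${resv}"   # in this log: anon=0, surp=0
)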
00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7559276 kB' 'MemAvailable: 9486312 kB' 'Buffers: 2436 kB' 'Cached: 2136964 kB' 'SwapCached: 0 kB' 'Active: 889984 kB' 'Inactive: 1369580 kB' 'Active(anon): 130636 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369580 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121772 kB' 'Mapped: 48636 kB' 'Shmem: 10464 kB' 'KReclaimable: 70092 kB' 'Slab: 143996 kB' 'SReclaimable: 70092 kB' 'SUnreclaim: 73904 kB' 'KernelStack: 6288 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 
'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.360 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 
22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.361 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.362 
22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7559276 kB' 'MemAvailable: 9486312 kB' 'Buffers: 2436 kB' 'Cached: 2136964 kB' 'SwapCached: 0 kB' 'Active: 890244 kB' 'Inactive: 1369580 kB' 'Active(anon): 130896 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369580 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122032 kB' 'Mapped: 48636 kB' 'Shmem: 10464 kB' 'KReclaimable: 70092 kB' 'Slab: 143996 kB' 'SReclaimable: 70092 kB' 'SUnreclaim: 73904 kB' 'KernelStack: 6288 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 
22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.362 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.363 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:45.364 nr_hugepages=1024 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:45.364 resv_hugepages=0 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.364 surplus_hugepages=0 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.364 anon_hugepages=0 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.364 22:51:57 
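For reference, the get_meminfo calls traced above reduce to a keyed lookup in /proc/meminfo. A minimal sketch of such a helper, assuming the interface the trace shows; the real setup/common.sh version mapfiles the whole file, also accepts a node id (reading /sys/devices/system/node/node<N>/meminfo and stripping the leading "Node <N>" prefix, as the mem=("${mem[@]#Node +([0-9]) }") step shows), but the effect for the system-wide case is the same:

get_meminfo() {
    # get_meminfo <key> -> print the value for that key from /proc/meminfo
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done </proc/meminfo
    echo 0   # sketch-only fallback when the key is absent
}

surp=$(get_meminfo HugePages_Surp)   # 0 in the run above
resv=$(get_meminfo HugePages_Rsvd)   # 0 in the run above

With 1024 pages allocated and no surplus or reserved pages, the consistency check that follows, (( 1024 == nr_hugepages + surp + resv )), passes.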
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7559024 kB' 'MemAvailable: 9486060 kB' 'Buffers: 2436 kB' 'Cached: 2136964 kB' 'SwapCached: 0 kB' 'Active: 890164 kB' 'Inactive: 1369580 kB' 'Active(anon): 130816 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369580 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121948 kB' 'Mapped: 48636 kB' 'Shmem: 10464 kB' 'KReclaimable: 70092 kB' 'Slab: 143988 kB' 'SReclaimable: 70092 kB' 'SUnreclaim: 73896 kB' 'KernelStack: 6288 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.364 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:45.365 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.366 22:51:57 
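The get_nodes step traced above enumerates the /sys/devices/system/node/node<N> directories and records each node's HugePages_Total (1024 here, on a single-NUMA-node VM). A rough stand-alone equivalent, using awk instead of the script's read/IFS loop and a plain glob instead of its extglob pattern:

declare -a nodes_sys
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node_id=${node_dir##*node}
    # per-node meminfo lines carry a "Node <id> " prefix, so match anywhere in the line
    nodes_sys[node_id]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
done
no_nodes=${#nodes_sys[@]}                       # 1 on this VM
echo "node0=${nodes_sys[0]} expecting 1024"     # matches the check at the end of the test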
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7559024 kB' 'MemUsed: 4682956 kB' 'SwapCached: 0 kB' 'Active: 890168 kB' 'Inactive: 1369580 kB' 'Active(anon): 130820 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369580 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2139400 kB' 'Mapped: 48636 kB' 'AnonPages: 121948 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70108 kB' 'Slab: 144004 kB' 'SReclaimable: 70108 kB' 'SUnreclaim: 73896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.366 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.625 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.625 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.625 22:51:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.625 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
(the get_meminfo scan continues field by field -- Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free -- each compared against HugePages_Surp and skipped with "continue")
00:04:45.626 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.626 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:45.626 22:51:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:45.626 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:45.626 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.626 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.626 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.626 node0=1024 expecting 1024
00:04:45.626 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:45.626 22:51:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:45.626
00:04:45.626 real 0m1.009s
00:04:45.626 user 0m0.475s
00:04:45.626 sys 0m0.460s
00:04:45.626 22:51:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:45.626 22:51:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:45.626 ************************************
00:04:45.626 END TEST default_setup
00:04:45.626 ************************************
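The trace that closes TEST default_setup above is setup/common.sh's get_meminfo helper walking a meminfo file one field at a time until it reaches the requested key (HugePages_Surp here) and echoing its value (0). A minimal bash sketch of that loop, reconstructed from the commands visible in the trace rather than copied from the SPDK source; the per-node branch follows the [[ -e /sys/devices/system/node/node<N>/meminfo ]] check seen above:

    # Sketch of the traced get_meminfo loop (reconstructed from the xtrace, not the
    # verbatim SPDK helper). Prints one meminfo field, optionally for one NUMA node.
    shopt -s extglob   # needed for the +([0-9]) pattern that strips the "Node <N> " prefix

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val line
        local mem_f=/proc/meminfo mem
        # Per-node counters live in /sys/devices/system/node/node<N>/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <N> "; drop it so both formats parse alike.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"    # e.g. HugePages_Surp -> 0 in the run above
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Surp      # prints 0 on the test VM above

The field list skipped in the trace (ending at HugePages_Surp, with no CommitLimit/Vmalloc/Cma entries) is what the per-node meminfo format looks like, which is why the helper bothers to strip the "Node <N> " prefix at all.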
00:04:45.626 22:51:57 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:45.626 22:51:57 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:45.626 22:51:57 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:45.626 22:51:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:45.626 ************************************
00:04:45.626 START TEST per_node_1G_alloc
00:04:45.626 ************************************
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:45.626 22:51:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:45.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:45.888 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:45.888 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
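The get_test_nr_hugepages 1048576 0 block above turns a 1 GiB (1048576 kB) request into 512 default-size hugepages (1048576 / 2048 = 512) and books all of them on NUMA node 0, which is what the NRHUGE=512 HUGENODE=0 environment handed to setup.sh expresses. A stand-alone sketch of that arithmetic, with variable names mirroring the trace (this is not the hugepages.sh function itself):

    # Sketch of the sizing step traced above. Converts a size in kB into a per-node count of
    # default-size hugepages; 2048 kB matches 'Hugepagesize: 2048 kB' in the snapshots below.
    default_hugepages=2048                 # kB per hugepage

    size=1048576                           # requested test size in kB (1 GiB)
    node_ids=(0)                           # nodes named on the command line, here just node 0

    nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512

    declare -a nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages     # nodes_test[0]=512, as in the trace
    done

    echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[*]}"   # -> NRHUGE=512 HUGENODE=0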
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.888 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8613736 kB' 'MemAvailable: 10540784 kB' 'Buffers: 2436 kB' 'Cached: 2136964 kB' 'SwapCached: 0 kB' 'Active: 890356 kB' 'Inactive: 1369588 kB' 'Active(anon): 131008 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122120 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 143984 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 73880 kB' 'KernelStack: 6276 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
(the get_meminfo scan then walks this snapshot one field at a time -- read -r var val _, [[ <field> == AnonHugePages ]], continue -- from MemTotal through HardwareCorrupted)
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
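Before it starts counting, verify_nr_hugepages checks the transparent hugepage mode: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above is true because the active mode is [madvise] rather than [never], so the script samples AnonHugePages (0 kB on this VM) to keep THP-backed pages out of its later math. A small sketch of that guard; the sysfs path is the usual home of that "always [madvise] never" string and is an assumption here, and the field read uses awk instead of the test's get_meminfo helper:

    # Sketch of the THP guard traced above (assumed path:
    # /sys/kernel/mm/transparent_hugepage/enabled; the test itself goes through get_meminfo).
    anon=0
    thp_f=/sys/kernel/mm/transparent_hugepage/enabled
    if [[ -r $thp_f && $(<"$thp_f") != *"[never]"* ]]; then
        # THP is active (here: [madvise]), so anonymous huge pages can show up in meminfo;
        # record the current amount so it can be discounted from the hugetlb accounting.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 kB in the run above
    fi
    echo "anon=$anon"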
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8613736 kB' 'MemAvailable: 10540784 kB' 'Buffers: 2436 kB' 'Cached: 2136964 kB' 'SwapCached: 0 kB' 'Active: 890052 kB' 'Inactive: 1369588 kB' 'Active(anon): 130704 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122108 kB' 'Mapped: 48768 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 143980 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 73876 kB' 'KernelStack: 6244 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
00:04:45.890 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
(the scan walks this second snapshot one field at a time -- [[ <field> == HugePages_Surp ]], continue -- from MemTotal through HugePages_Rsvd)
00:04:45.891 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.891 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:45.891 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
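With anon=0 and surp=0 recorded and HugePages_Rsvd about to be read, the snapshots above already contain everything the verification needs: 512 pages of 2048 kB, all free, none reserved or surplus. A worked check of those numbers (a sketch of the bookkeeping this data supports, not the exact formula in hugepages.sh):

    # Values taken from the meminfo snapshots above.
    HugePages_Total=512 HugePages_Free=512 HugePages_Rsvd=0 HugePages_Surp=0
    Hugepagesize=2048                      # kB
    nr_requested=512                       # NRHUGE=512 HUGENODE=0 from the setup step

    (( pool_kB = HugePages_Total * Hugepagesize ))   # 512 * 2048 = 1048576 kB, i.e. 'Hugetlb: 1048576 kB'

    # With no surplus and no reserved pages, the persistent pool equals the request exactly.
    if (( HugePages_Total - HugePages_Surp == nr_requested && HugePages_Rsvd == 0 )); then
        echo "pool consistent: $nr_requested pages ($pool_kB kB), $HugePages_Free free"
    fi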
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8613736 kB' 'MemAvailable: 10540784 kB' 'Buffers: 2436 kB' 'Cached: 2136964 kB' 'SwapCached: 0 kB' 'Active: 890004 kB' 'Inactive: 1369588 kB' 'Active(anon): 130656 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122056 kB' 'Mapped: 48636 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 143980 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 73876 kB' 'KernelStack: 6304 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
00:04:45.892 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
(the same field-by-field scan repeats against HugePages_Rsvd -- MemTotal, MemFree, ..., KernelStack -- and the captured excerpt ends there, mid-scan, at 00:04:45.893)
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 
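The condensed span above is a single get_meminfo lookup: the function walks /proc/meminfo as "Key: value" pairs and only stops on the requested key (HugePages_Rsvd here, which comes back 0 just below). A minimal standalone sketch of that scan follows; the function name and structure are illustrative, not a copy of setup/common.sh.

    # Illustrative re-implementation of the scan the condensed trace performs:
    # walk "Key: value" pairs and print the value of the one requested key.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Non-matching keys fall through, exactly like the repeated
            # "[[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" lines.
            [[ $var == "$get" ]] || continue
            echo "$val"      # e.g. 0 for HugePages_Rsvd in this run
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd   # prints 0 on this box, per the trace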
22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.893 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:45.894 nr_hugepages=512 00:04:45.894 resv_hugepages=0 00:04:45.894 surplus_hugepages=0 00:04:45.894 anon_hugepages=0 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo 
anon_hugepages=0 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.894 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.154 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.155 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8613736 kB' 'MemAvailable: 10540784 kB' 'Buffers: 2436 kB' 'Cached: 2136964 kB' 'SwapCached: 0 kB' 'Active: 889988 kB' 'Inactive: 1369588 kB' 'Active(anon): 130640 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122008 kB' 'Mapped: 48636 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 143980 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 73876 kB' 'KernelStack: 6288 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:46.155 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.155 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.155 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.155 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.155 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.155 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.155 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:46.155 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.155 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.155
[xtrace condensed: the same per-field scan repeats for the HugePages_Total lookup -- every /proc/meminfo field from MemAvailable through ShmemPmdMapped fails the "== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l" test and falls through "setup/common.sh@32 -- # continue", "@31 -- # IFS=': '", "@31 -- # read -r var val _", between 00:04:46.155 and 00:04:46.156]
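The HugePages_Total value this condensed lookup returns (512, echoed just below) feeds the consistency check that follows it: the kernel's total must equal the requested nr_hugepages plus any surplus and reserved pages, which the script fetched the same way. A hedged sketch of that bookkeeping, using an awk helper in place of the script's get_meminfo:

    # Sketch of the accounting around the condensed lookups; helper name and
    # structure are illustrative, only the check mirrors the trace.
    read_meminfo() { awk -v key="$1:" '$1 == key {print $2; exit}' /proc/meminfo; }

    nr_hugepages=512                       # what the test asked for
    surp=$(read_meminfo HugePages_Surp)    # 0 in this run
    resv=$(read_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(read_meminfo HugePages_Total)  # 512 in this run

    # Mirrors "(( 512 == nr_hugepages + surp + resv ))" in the trace.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "hugepage accounting mismatch" >&2
    fi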
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8613484 kB' 'MemUsed: 3628496 kB' 'SwapCached: 0 kB' 'Active: 890268 kB' 'Inactive: 1369588 kB' 'Active(anon): 130920 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 2139400 kB' 'Mapped: 48636 kB' 'AnonPages: 122064 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70104 kB' 'Slab: 143980 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 73876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:46.156 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.156
[xtrace condensed: the per-field scan runs once more for the node0 HugePages_Surp lookup -- every field of /sys/devices/system/node/node0/meminfo from SwapCached through HugePages_Free fails the "== \H\u\g\e\P\a\g\e\s\_\S\u\r\p" test and falls through "setup/common.sh@32 -- # continue", "@31 -- # IFS=': '", "@31 -- # read -r var val _", between 00:04:46.156 and 00:04:46.158] 00:04:46.158 22:51:58
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.158 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.158 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.158 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.158 node0=512 expecting 512 00:04:46.158 ************************************ 00:04:46.158 END TEST per_node_1G_alloc 00:04:46.158 ************************************ 00:04:46.158 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.158 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.158 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.158 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.158 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:46.158 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:46.158 00:04:46.158 real 0m0.547s 00:04:46.158 user 0m0.260s 00:04:46.158 sys 0m0.294s 00:04:46.158 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.158 22:51:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.158 22:51:58 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:46.158 22:51:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.158 22:51:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.158 22:51:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.158 ************************************ 00:04:46.158 START TEST even_2G_alloc 00:04:46.158 ************************************ 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.158 22:51:58 
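The trace above closes out per_node_1G_alloc (node 0 ends up with the expected 512 pages, in roughly 0.55 s of wall time) and immediately starts even_2G_alloc, where get_test_nr_hugepages turns the requested 2097152 into nr_hugepages=1024. A minimal sketch of that conversion, assuming the argument is a size in kB and a 2048 kB default hugepage as reported by the 'Hugepagesize: 2048 kB' field in the meminfo snapshots further down; the variable names below are illustrative, not the script's own:

# Sketch only: the size-to-page-count arithmetic recorded in the trace above.
size_kb=2097152                          # 2 GiB expressed in kB
default_hugepage_kb=2048                 # 2 MiB hugepages
nr_hugepages=$(( size_kb / default_hugepage_kb ))
echo "nr_hugepages=$nr_hugepages"        # prints nr_hugepages=1024

With a single memory node in play (_no_nodes=1), the whole 1024-page budget is assigned to node 0, which is what the nodes_test bookkeeping on the next lines records before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes hand control to setup.sh.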
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.158 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.417 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.417 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241980 kB' 'MemFree: 7562568 kB' 'MemAvailable: 9489620 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890512 kB' 'Inactive: 1369592 kB' 'Active(anon): 131164 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122272 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144108 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74004 kB' 'KernelStack: 6276 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.417 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 
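The block above (and the two like it that follow) is verify_nr_hugepages collecting its correction terms. It first checks that transparent hugepages are not fully disabled (the 'always [madvise] never' string is what the kernel's transparent_hugepage 'enabled' knob typically reports), then calls get_meminfo AnonHugePages, which snapshots /proc/meminfo into an array and scans it key by key, skipping every field with 'continue' until it reaches the one requested. A simplified, self-contained sketch of that lookup pattern; get_meminfo_sketch is an illustrative name standing in for the helper being traced:

# Sketch of the per-key scan visible in the xtrace: split each
# 'Key: value unit' line on ': ' and print the value of the requested key.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
get_meminfo_sketch AnonHugePages    # the trace below records 0 for this key

Every 'continue' in the trace is one skipped key from that scan, which is why each get_meminfo call produces a near-identical screenful of output.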
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.418 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.681 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.681 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.681 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7562568 kB' 'MemAvailable: 9489620 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890268 kB' 'Inactive: 1369592 kB' 'Active(anon): 130920 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122028 kB' 'Mapped: 48640 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144112 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74008 kB' 'KernelStack: 6288 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
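With anon=0 recorded, the same machinery runs again, this time asking for HugePages_Surp (and, a little further down, HugePages_Rsvd). One detail visible in the trace: the existence test [[ -e /sys/devices/system/node/node/meminfo ]] has an empty node number spliced into the path, so the per-node file never exists here and the lookup falls back to the global /proc/meminfo. A small sketch of that file selection, under the assumption that a real node id would be substituted whenever a node argument is supplied:

# Sketch: prefer the per-node meminfo file when a node id is given,
# otherwise fall back to the global /proc/meminfo, as the trace above does.
node=""                                                  # empty in the calls traced here
mem_f=/proc/meminfo
node_f="/sys/devices/system/node/node${node}/meminfo"    # becomes '.../node/meminfo' when node is empty
[[ -e $node_f ]] && mem_f=$node_f
echo "reading hugepage counters from $mem_f"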
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.682 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 
22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.683 22:51:58 
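The HugePages_Rsvd lookup starting here is the last of the three; once it returns, the function has its correction terms (anon, surp and rsvd are each 0 on this host), and the snapshots above already show HugePages_Total: 1024 and HugePages_Free: 1024, i.e. exactly the pool that NRHUGE=1024 asked for. The sketch below is an equivalent stand-alone check, not the script's own assertion, just the same comparison expressed directly against /proc/meminfo:

# Sketch of the state being verified (counter names as printed in the
# snapshots above; nrhuge mirrors the NRHUGE=1024 set earlier in the trace).
nrhuge=1024
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
free=$(awk  '$1 == "HugePages_Free:"  {print $2}' /proc/meminfo)
surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
rsvd=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
(( total == nrhuge && free == nrhuge && surp == 0 && rsvd == 0 )) \
    && echo "hugepage pool matches the requested ${nrhuge} pages"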
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.683 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7562568 kB' 'MemAvailable: 9489620 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890056 kB' 'Inactive: 1369592 kB' 'Active(anon): 130708 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122096 kB' 'Mapped: 48640 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144108 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74004 kB' 'KernelStack: 6304 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.684 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.684 22:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.685 nr_hugepages=1024 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.685 resv_hugepages=0 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.685 surplus_hugepages=0 00:04:46.685 anon_hugepages=0 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.685 22:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7562568 kB' 'MemAvailable: 9489620 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890040 kB' 'Inactive: 1369592 kB' 'Active(anon): 130692 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 121824 kB' 'Mapped: 48640 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144100 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 73996 kB' 'KernelStack: 6288 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.685 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 
22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.686 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
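For reference, the get_meminfo lookups being traced here reduce to a small parsing loop: read /proc/meminfo (or, when a node is given, that node's meminfo under /sys/devices/system/node), strip the "Node N " prefix that the per-node files carry, then split each "Key: value" pair on ': ' and print the value for the requested key. The sketch below is reconstructed from the xtrace above, not copied from setup/common.sh, so the helper name and details are assumptions:

    # Minimal sketch of the lookup traced above (helper name is assumed).
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node queries read that node's own meminfo file instead.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; drop it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Print the value (e.g. "1024") once the requested key matches.
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }
    # Usage: get_meminfo_sketch HugePages_Total      # system-wide count
    #        get_meminfo_sketch HugePages_Surp 0     # NUMA node 0 only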
00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 
22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7562568 kB' 'MemUsed: 4679412 kB' 'SwapCached: 0 kB' 'Active: 890500 kB' 'Inactive: 1369592 kB' 'Active(anon): 131152 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2139404 kB' 'Mapped: 48640 kB' 'AnonPages: 
122304 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70104 kB' 'Slab: 144092 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 73988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.687 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.688 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.689 22:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.689 node0=1024 expecting 1024 00:04:46.689 ************************************ 00:04:46.689 END TEST even_2G_alloc 00:04:46.689 ************************************ 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.689 00:04:46.689 real 0m0.562s 00:04:46.689 user 0m0.273s 00:04:46.689 sys 0m0.281s 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.689 22:51:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.689 22:51:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:46.689 22:51:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.689 22:51:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.689 22:51:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.689 ************************************ 00:04:46.689 START TEST odd_alloc 00:04:46.689 ************************************ 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:46.689 22:51:59 
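The even_2G_alloc verification that just completed boils down to two checks: the system-wide pool must account for the requested pages plus surplus plus reserved, and each NUMA node's per-node meminfo must report the count the test expected (a single node with 1024 pages in this run, hence "node0=1024 expecting 1024"). A minimal sketch of that accounting, assuming the get_meminfo_sketch helper above and a caller-supplied expected count; the real hugepages.sh also splits the request across nodes, which a one-node VM never exercises:

    # Sketch of the verification traced above (helper names are assumed).
    verify_hugepages_sketch() {
        local expected=${1:-1024} total surp resv node id
        total=$(get_meminfo_sketch HugePages_Total)
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        # System-wide pool must equal requested + surplus + reserved pages.
        (( total == expected + surp + resv )) || return 1
        # Each node's own meminfo must report the slice the test asked for.
        for node in /sys/devices/system/node/node[0-9]*; do
            id=${node##*node}
            echo "node$id=$(get_meminfo_sketch HugePages_Total "$id") expecting $expected"
        done
    }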
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.689 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.263 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.263 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 
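The odd_alloc test starting here requests HUGEMEM=2049 (MiB), i.e. 2098176 kB, which does not divide evenly into 2048 kB pages; the trace shows the request rounded up to an odd nr_hugepages=1025 (1025 * 2048 kB = 2099200 kB, the Hugetlb figure in the snapshot that follows). The exact rounding lives in hugepages.sh and is not visible in this excerpt; the lines below only reproduce the arithmetic behind the numbers in the trace:

    # Arithmetic behind the odd_alloc sizing (ceiling rounding is an assumption).
    hugemem_mb=2049
    size_kb=$((hugemem_mb * 1024))                # 2098176 kB requested
    page_kb=2048                                  # Hugepagesize from meminfo
    nr=$(( (size_kb + page_kb - 1) / page_kb ))   # rounds up to 1025 pages
    echo "nr_hugepages=$nr pool=$((nr * page_kb)) kB"   # 1025 / 2099200 kB

The verify_nr_hugepages pass that follows also appears to check that transparent hugepages are not forced to [never] before sampling AnonHugePages, which is why the next get_meminfo scan in the trace looks for that key.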
00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7558756 kB' 'MemAvailable: 9485808 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890668 kB' 'Inactive: 1369592 kB' 'Active(anon): 131320 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122452 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144120 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74016 kB' 'KernelStack: 6320 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.263 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 
22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
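The dense run of 'continue' entries through this stretch is setup/common.sh's get_meminfo walking /proc/meminfo one 'key: value' pair at a time until it reaches the field it was asked for (AnonHugePages in this pass) and echoing its number. A self-contained sketch of that pattern follows; the function name and the hard-coded field are illustrative assumptions, not the upstream helper.

    # Hypothetical re-creation of the scan traced above.
    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each non-matching key shows up as a 'continue' in the log
            echo "$val"                        # the value column, e.g. 0 for AnonHugePages here
            return 0
        done < /proc/meminfo
        return 1
    }
    anon=$(get_field AnonHugePages)            # comes back as 0 in this run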
00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.264 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.265 
22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7558756 kB' 'MemAvailable: 9485808 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890452 kB' 'Inactive: 1369592 kB' 'Active(anon): 131104 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122188 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144140 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74036 kB' 'KernelStack: 6348 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.265 22:51:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.265 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 
22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.266 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
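The HugePages_Rsvd pass just opened with the same preamble the earlier lookups used (common.sh@18-29): node is left empty, so the per-node sysfs file does not exist and the helper reads the system-wide /proc/meminfo instead. A hypothetical illustration of that source selection, inferred from the [[ -e ... ]] and [[ -n '' ]] checks in the trace:

    # Illustration only - assumed intent of the mem_f selection at common.sh@22-25.
    node=""                                   # empty here: no per-NUMA-node query was requested
    mem_f=/proc/meminfo                       # system-wide default
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node counters when a node is given
    fi
    echo "reading hugepage counters from $mem_f"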
00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7558756 kB' 'MemAvailable: 9485808 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890468 kB' 'Inactive: 1369592 kB' 'Active(anon): 131120 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122504 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144140 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74036 kB' 'KernelStack: 6364 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.267 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
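This is the last of the three meminfo passes; once HugePages_Rsvd comes back as 0, verify_nr_hugepages has anon, surp and resv in hand, and the hugepages.sh@107/@109 lines a little further down check them against the requested count. A sketch of that bookkeeping using the values visible in this run (illustrative, not the script itself):

    # Values as reported in this run's /proc/meminfo snapshots.
    nr_hugepages=1025   # requested by odd_alloc
    anon=0              # AnonHugePages
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1025          # HugePages_Total
    # Mirrors the (( 1025 == nr_hugepages + surp + resv )) and (( 1025 == nr_hugepages )) checks below.
    if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "hugepage accounting mismatch" >&2
    fi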
00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.268 
22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.268 nr_hugepages=1025 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:47.268 resv_hugepages=0 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.268 surplus_hugepages=0 00:04:47.268 anon_hugepages=0 00:04:47.268 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7558756 kB' 'MemAvailable: 9485808 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890156 kB' 'Inactive: 1369592 kB' 'Active(anon): 130808 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144124 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74020 kB' 'KernelStack: 6316 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
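Above, setup/hugepages.sh has already computed resv=0 and echoed nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0; the HugePages_Total scan that resumes below feeds the accounting check at hugepages.sh@107/@110. Roughly, that check amounts to the following self-contained sketch (using awk instead of the script's own get_meminfo; 1025 is the odd page count this test requested):

  # Sketch of the odd_alloc accounting check: the kernel must report exactly the
  # requested (odd) number of huge pages once surplus and reserved pages are
  # added in. All extra terms are 0 in the run traced here.
  requested=1025
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025 in this log
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0
  if (( total == requested + surp + resv )); then
      echo "nr_hugepages=$total"   # e.g. nr_hugepages=1025, as echoed above
  else
      echo "odd_alloc mismatch: $total != $requested + $surp + $resv" >&2
  fi

Once the scan below returns 1025 for HugePages_Total, the trace shows the same identity being re-checked before the per-node counts are gathered from /sys/devices/system/node/node0/meminfo.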
00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.269 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7559440 kB' 'MemUsed: 4682540 kB' 'SwapCached: 0 kB' 'Active: 890128 kB' 'Inactive: 1369592 kB' 'Active(anon): 130780 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2139404 kB' 'Mapped: 48832 kB' 'AnonPages: 122208 kB' 'Shmem: 10464 kB' 'KernelStack: 6332 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70104 kB' 'Slab: 144120 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.270 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.271 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:47.272 node0=1025 expecting 1025 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:47.272 00:04:47.272 real 0m0.561s 00:04:47.272 user 0m0.291s 00:04:47.272 sys 0m0.273s 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.272 22:51:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:47.272 ************************************ 00:04:47.272 END TEST odd_alloc 00:04:47.272 ************************************ 00:04:47.272 22:51:59 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:47.272 22:51:59 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.272 22:51:59 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.272 22:51:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:47.272 ************************************ 00:04:47.272 START TEST custom_alloc 00:04:47.272 ************************************ 00:04:47.272 22:51:59 
setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:47.272 
22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.272 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.845 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.845 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8623892 kB' 'MemAvailable: 10550944 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890816 kB' 'Inactive: 1369592 kB' 'Active(anon): 131468 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122876 kB' 'Mapped: 49160 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144152 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74048 kB' 'KernelStack: 6292 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:51:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.845 22:52:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.845 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
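The xtrace above is setup/common.sh's get_meminfo walking every key of the /proc/meminfo snapshot it just printed, taking the `continue` branch for each non-matching key until it reaches AnonHugePages, where it echoes the value and returns. A minimal sketch of that lookup, reconstructed from the trace and simplified (not the verbatim SPDK helper; the per-node handling is omitted):

  # sketch only: same idea as the get_meminfo seen in the trace, global /proc/meminfo case
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do        # "AnonHugePages:  0 kB" -> var=AnonHugePages val=0 _="kB"
          [[ $var == "$get" ]] && echo "$val" && return 0
      done < /proc/meminfo                        # non-matching keys simply fall through ("continue" above)
      return 1
  }
  # get_meminfo_sketch AnonHugePages   -> prints 0 in this run, so hugepages.sh records anon=0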
00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.846 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8623892 kB' 'MemAvailable: 10550944 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890284 kB' 'Inactive: 1369592 kB' 'Active(anon): 130936 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121852 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144152 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74048 kB' 'KernelStack: 6260 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.847 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8623892 kB' 'MemAvailable: 10550944 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890300 kB' 'Inactive: 1369592 kB' 'Active(anon): 130952 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122120 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144152 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74048 kB' 'KernelStack: 6260 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.848 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
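Just above, get_meminfo is entered again (this time for HugePages_Rsvd) with an empty node argument, so the per-node check `[[ -e /sys/devices/system/node/node/meminfo ]]` fails and the global /proc/meminfo is read. The `mem=("${mem[@]#Node +([0-9]) }")` step in the trace only matters for per-node meminfo files, whose lines carry a "Node <n> " prefix; a small illustration of that stripping (hypothetical node0 path, extglob assumed since `+([0-9])` is an extglob pattern):

  shopt -s extglob                                           # required for the +([0-9]) pattern
  mapfile -t mem < /sys/devices/system/node/node0/meminfo    # per-node lines: "Node 0 HugePages_Total:   512"
  mem=("${mem[@]#Node +([0-9]) }")                           # -> "HugePages_Total:   512", as in the trace
  printf '%s\n' "${mem[@]}" | grep '^HugePages_'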
00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.849 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
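The scan in progress here resolves HugePages_Rsvd to 0, which setup/hugepages.sh then records as resv=0. Outside the harness the same fields can be spot-checked directly, for example (illustrative only, not part of the test):

  awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo           # expected to print 0 on this VM
  grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo  # 512 / 512 / 0 / 0 in the snapshots above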
00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.850 nr_hugepages=512 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:47.850 resv_hugepages=0 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.850 surplus_hugepages=0 00:04:47.850 anon_hugepages=0 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.850 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8623416 kB' 'MemAvailable: 10550468 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 890332 kB' 'Inactive: 1369592 kB' 'Active(anon): 130984 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122120 kB' 'Mapped: 48640 kB' 'Shmem: 10464 kB' 'KReclaimable: 70104 kB' 'Slab: 144144 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74040 kB' 'KernelStack: 6304 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
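The trace above and below is stepping through the get_meminfo helper from setup/common.sh: it picks either /proc/meminfo or a node's /sys/devices/system/node/node<N>/meminfo, strips the per-node "Node <n> " prefix, then splits every "Key: value" line on ': ' until the requested key matches and its value is echoed. A minimal stand-alone sketch of that lookup, reconstructed from the trace (function name and layout here are simplified for illustration, not the verbatim source):

  shopt -s extglob   # needed for the +([0-9]) pattern that drops the "Node <n>" prefix

  meminfo_value() {                    # usage: meminfo_value <Key> [<node>]
      local key=$1 node=$2 mem_f=/proc/meminfo line var val _
      # A per-node query reads that node's own meminfo file when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }           # per-node lines carry a "Node <n> " prefix
          IFS=': ' read -r var val _ <<<"$line" # split "Key: value [kB]" into key and value
          if [[ $var == "$key" ]]; then
              echo "$val"                       # e.g. 512 for HugePages_Total in the run above
              return 0
          fi
      done <"$mem_f"
      return 1
  }

Scanning key by key is why the log shows one comparison-and-continue pair for every /proc/meminfo field that precedes the requested HugePages_* entry.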
00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.851 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8623416 kB' 'MemUsed: 3618564 kB' 'SwapCached: 0 kB' 'Active: 890296 kB' 'Inactive: 1369592 kB' 'Active(anon): 130948 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2139404 kB' 'Mapped: 48640 kB' 'AnonPages: 122124 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70104 kB' 'Slab: 144144 kB' 'SReclaimable: 70104 kB' 'SUnreclaim: 74040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.852 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.853 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.854 22:52:00 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.854 node0=512 expecting 512 00:04:47.854 ************************************ 00:04:47.854 END TEST custom_alloc 00:04:47.854 ************************************ 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:47.854 00:04:47.854 real 0m0.561s 00:04:47.854 user 0m0.275s 00:04:47.854 sys 0m0.284s 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.854 22:52:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:47.854 22:52:00 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:47.854 22:52:00 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.854 22:52:00 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.854 22:52:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:47.854 ************************************ 00:04:47.854 START TEST no_shrink_alloc 00:04:47.854 ************************************ 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.854 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.426 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.426 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.426 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.426 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.426 22:52:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7572904 kB' 'MemAvailable: 9499948 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 885872 kB' 'Inactive: 1369592 kB' 'Active(anon): 126524 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117668 kB' 'Mapped: 48152 kB' 'Shmem: 10464 kB' 'KReclaimable: 70084 kB' 'Slab: 144004 kB' 'SReclaimable: 70084 kB' 'SUnreclaim: 73920 kB' 'KernelStack: 6224 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
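Once these meminfo lookups return, both the custom_alloc test that just finished ("node0=512 expecting 512") and the no_shrink_alloc test now starting perform the same bookkeeping: compare HugePages_Total against the requested page count plus surplus and reserved pages, then check that each NUMA node (only node0 on this guest) holds its expected share; the [[ ... != *[never]* ]] test being traced around this point merely decides whether AnonHugePages should be sampled as well. A rough sketch of that accounting, reusing the illustrative meminfo_value helper from the earlier sketch (simplified from test/setup/hugepages.sh, not the verbatim flow):

  verify_hugepages() {                  # usage: verify_hugepages <expected>   (512 or 1024 above)
      local expected=$1 total surp resv node0
      surp=$(meminfo_value HugePages_Surp)
      resv=$(meminfo_value HugePages_Rsvd)
      total=$(meminfo_value HugePages_Total)
      # System-wide check: every requested page must be accounted for, allowing
      # for surplus and reserved pages reported by the kernel (both 0 in this run).
      (( total == expected + surp + resv )) || return 1
      # Per-node check: on this single-node VM, node0 carries the whole pool.
      node0=$(meminfo_value HugePages_Total 0)
      echo "node0=$node0 expecting $expected"
      [[ $node0 == "$expected" ]]
  }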
00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.427 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7572904 kB' 'MemAvailable: 9499948 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 885660 kB' 'Inactive: 1369592 kB' 'Active(anon): 126312 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117440 kB' 'Mapped: 47960 kB' 'Shmem: 10464 kB' 'KReclaimable: 70084 kB' 'Slab: 143928 kB' 'SReclaimable: 70084 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6204 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- 
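The loop traced above (and continued below) is setup/common.sh's get_meminfo helper walking every "Key: value" pair printed from /proc/meminfo until it reaches the requested field -- first AnonHugePages, now HugePages_Surp -- which is why each non-matching field appears under xtrace as one [[ ... ]] / continue pair. A minimal sketch of that helper, reconstructed from the traced line numbers (common.sh@17..@33); the exact per-node fallback condition and the behaviour on a missing key are assumptions, only the steps visible in the trace are confirmed:

  # Minimal sketch of get_meminfo as reconstructed from the xtrace (setup/common.sh@17..@33).
  # The per-node fallback test and the return value on a missing key are assumptions.
  shopt -s extglob                      # needed for the +([0-9]) prefix pattern below

  get_meminfo() {
      local get=$1 node=$2              # field name to look up, optional NUMA node
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      # When a node is given and a per-node view exists, read that instead
      # (its lines carry a "Node <n> " prefix, stripped below).
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")

      # Walk "Key: value [kB]" pairs until the requested key matches, then print its value.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as get_meminfo HugePages_Surp against the snapshot dumped above, this prints 0, which hugepages.sh stores as surp=0 at hugepages.sh@99 in the trace that follows.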
setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.428 22:52:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.428 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7572904 kB' 'MemAvailable: 9499948 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 885408 kB' 'Inactive: 1369592 kB' 'Active(anon): 126060 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117180 kB' 'Mapped: 47960 kB' 'Shmem: 10464 kB' 'KReclaimable: 70084 kB' 'Slab: 143928 kB' 'SReclaimable: 70084 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6204 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- 
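As a quick cross-check of the snapshot printed in the dump above: HugePages_Free equals HugePages_Total (1024), so none of the 2 MiB pages are in use yet, and with a single configured page size Hugetlb works out to exactly HugePages_Total times Hugepagesize:

  # Sanity arithmetic on the values in the dump above:
  echo "$((1024 * 2048)) kB"   # HugePages_Total x Hugepagesize = 2097152 kB (2 GiB) = Hugetlb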
setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.430 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.431 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:48.432 nr_hugepages=1024 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.432 resv_hugepages=0 00:04:48.432 surplus_hugepages=0 00:04:48.432 anon_hugepages=0 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7572656 kB' 'MemAvailable: 9499700 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 885472 kB' 'Inactive: 1369592 kB' 'Active(anon): 126124 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117252 kB' 'Mapped: 47960 kB' 'Shmem: 10464 kB' 'KReclaimable: 70084 kB' 'Slab: 143928 kB' 'SReclaimable: 70084 kB' 'SUnreclaim: 73844 kB' 'KernelStack: 6220 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
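At this point hugepages.sh has collected anon, surp and resv via get_meminfo, echoes the summary seen above, and asserts that the requested pool size still adds up before re-reading HugePages_Total in the trace that follows. A sketch of that bookkeeping, under stated assumptions -- the wrapper name, the $expected parameter and sourcing nr_hugepages from /proc/sys/vm/nr_hugepages are assumptions, while the three get_meminfo lookups, the echoed summary and the two arithmetic checks mirror hugepages.sh@97..@110:

  # Sketch of the "no shrink" accounting traced at setup/hugepages.sh@97..@110.
  check_hugepage_accounting() {
      local expected=$1                              # 1024 in this run
      local nr_hugepages anon surp resv

      nr_hugepages=$(</proc/sys/vm/nr_hugepages)     # assumed source of nr_hugepages
      anon=$(get_meminfo AnonHugePages)              # THP-backed anonymous memory
      surp=$(get_meminfo HugePages_Surp)             # surplus pages beyond the configured pool
      resv=$(get_meminfo HugePages_Rsvd)             # reserved but not yet faulted pages

      echo "nr_hugepages=$nr_hugepages"
      echo "resv_hugepages=$resv"
      echo "surplus_hugepages=$surp"
      echo "anon_hugepages=$anon"

      # The pool must still account for every requested page: nothing was shrunk
      # away while the allocation was outstanding.
      ((expected == nr_hugepages + surp + resv)) || return 1
      ((expected == nr_hugepages)) || return 1
  }

With surp=0 and resv=0 as in this run, both checks reduce to 1024 == 1024, after which the script fetches HugePages_Total for the same comparison, as the continuing trace shows.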
continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.432 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 
22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 
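
The field-by-field "continue" trace above and below is get_meminfo (setup/common.sh@16-@33) walking a cached copy of /proc/meminfo until it reaches the requested key, then echoing its value. Below is a minimal sketch of that lookup pattern; get_meminfo_value is a hypothetical stand-in, not the project's helper, which additionally caches the file into an array with mapfile (which is exactly what produces this long per-field loop) and strips the "Node N " prefix the same way when a node is given.

get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # fall back to the per-node view when a node number is supplied and present
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # per-node files prefix every row with "Node N ", strip it first
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"        # bare number; a trailing "kB" unit lands in $_
            return 0
        fi
    done < "$mem_f"
    return 1
}

# e.g. get_meminfo_value HugePages_Total      -> 1024 in this run
#      get_meminfo_value HugePages_Surp 0     -> 0 (node 0)
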
22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.433 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7572404 kB' 'MemUsed: 4669576 kB' 'SwapCached: 0 kB' 'Active: 885204 kB' 'Inactive: 1369592 kB' 'Active(anon): 125856 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 2139404 kB' 'Mapped: 47960 kB' 'AnonPages: 117020 kB' 'Shmem: 10464 kB' 'KernelStack: 6236 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70084 kB' 'Slab: 143928 kB' 'SReclaimable: 70084 kB' 'SUnreclaim: 73844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.434 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 
22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.435 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.695 
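
The node-0 HugePages_Surp lookup finishing here (echo 0 / return 0 just below) feeds the per-node bookkeeping at hugepages.sh@110-@128, which ends in the "node0=1024 expecting 1024" line. A condensed sketch of that arithmetic follows, reusing the hypothetical get_meminfo_value helper from the earlier sketch and assuming a single 2048 kB hugepage size (matching "Hugepagesize: 2048 kB" in the dumps above); it is not the project's verify_nr_hugepages, only the shape of the check.

shopt -s extglob                              # for the node+([0-9]) glob below
nr_hugepages=1024                             # what this test configured earlier
resv=$(get_meminfo_value HugePages_Rsvd)      # 0 in this run
surp=$(get_meminfo_value HugePages_Surp)      # 0 in this run
total=$(get_meminfo_value HugePages_Total)    # 1024 in this run

(( total == nr_hugepages + surp + resv )) || echo "global hugepage count is off"

nodes_test=()
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    # expected per-node count: requested pages plus reserved/surplus seen on that node
    nodes_test[node]=$(( nr_hugepages + resv + $(get_meminfo_value HugePages_Surp "$node") ))
    actual=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node$node=$actual expecting ${nodes_test[node]}"
done
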
22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.695 node0=1024 expecting 1024 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.695 22:52:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.958 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.958 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:48.958 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.958 22:52:01 
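
The "INFO: Requested 512 hugepages but 1024 already allocated on node0" line above is scripts/setup.sh declining to shrink an existing allocation when CLEAR_HUGE=no and NRHUGE=512. The snippet below is only a hypothetical illustration of that observable grow-or-leave-alone policy, not the script's actual code; the sysfs path assumes 2048 kB pages on node0, and writing it needs root.

NRHUGE=${NRHUGE:-512}
CLEAR_HUGE=${CLEAR_HUGE:-no}
sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
current=$(< "$sysfs")

if [[ $CLEAR_HUGE == yes ]]; then
    echo "$NRHUGE" > "$sysfs"        # reset to exactly the requested count
elif (( current < NRHUGE )); then
    echo "$NRHUGE" > "$sysfs"        # grow only
else
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
fi
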
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.958 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7573672 kB' 'MemAvailable: 9500716 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 886580 kB' 'Inactive: 1369592 kB' 'Active(anon): 127232 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 118400 kB' 'Mapped: 48072 kB' 'Shmem: 10464 kB' 'KReclaimable: 70084 kB' 'Slab: 143836 kB' 'SReclaimable: 70084 kB' 'SUnreclaim: 73752 kB' 'KernelStack: 6308 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.959 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7573924 kB' 'MemAvailable: 9500968 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 885600 kB' 'Inactive: 1369592 kB' 'Active(anon): 126252 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 117360 kB' 'Mapped: 47900 kB' 'Shmem: 10464 kB' 'KReclaimable: 70084 kB' 'Slab: 143832 kB' 'SReclaimable: 70084 kB' 'SUnreclaim: 73748 kB' 'KernelStack: 6192 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.960 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 
22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.961 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.962 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7573924 kB' 'MemAvailable: 9500968 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 885324 kB' 'Inactive: 1369592 kB' 'Active(anon): 125976 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 117104 kB' 'Mapped: 47900 kB' 'Shmem: 10464 kB' 'KReclaimable: 70084 kB' 'Slab: 143832 kB' 'SReclaimable: 70084 kB' 'SUnreclaim: 73748 kB' 'KernelStack: 6192 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54516 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.962 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.963 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.964 nr_hugepages=1024 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:48.964 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.964 resv_hugepages=0 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.964 surplus_hugepages=0 00:04:48.964 anon_hugepages=0 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7573924 kB' 'MemAvailable: 9500968 kB' 'Buffers: 2436 kB' 'Cached: 2136968 kB' 'SwapCached: 0 kB' 'Active: 885584 kB' 'Inactive: 1369592 kB' 'Active(anon): 126236 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 117364 kB' 'Mapped: 47900 kB' 'Shmem: 10464 kB' 'KReclaimable: 70084 kB' 'Slab: 143832 kB' 'SReclaimable: 70084 kB' 'SUnreclaim: 73748 kB' 'KernelStack: 6192 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.964 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.964 
22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.964 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.965 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.966 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7573924 kB' 'MemUsed: 4668056 kB' 'SwapCached: 0 kB' 'Active: 885492 kB' 'Inactive: 1369592 kB' 'Active(anon): 126144 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1369592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 2139404 kB' 'Mapped: 47900 kB' 'AnonPages: 117248 kB' 'Shmem: 10464 kB' 'KernelStack: 6176 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70084 kB' 'Slab: 143832 kB' 'SReclaimable: 70084 kB' 'SUnreclaim: 73748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.966 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.225 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.225 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.225 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.225 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.225 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.225 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.225 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.225 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.225 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.225 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.225 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 
22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.226 22:52:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.226 node0=1024 expecting 1024 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:49.226 00:04:49.226 real 0m1.133s 00:04:49.226 user 0m0.590s 00:04:49.226 sys 0m0.539s 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.226 22:52:01 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:49.226 ************************************ 00:04:49.226 END TEST no_shrink_alloc 00:04:49.226 ************************************ 00:04:49.226 22:52:01 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:49.226 22:52:01 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:49.226 22:52:01 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:49.226 22:52:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.226 22:52:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:49.227 22:52:01 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.227 22:52:01 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:49.227 22:52:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:49.227 22:52:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:49.227 00:04:49.227 real 0m4.801s 00:04:49.227 user 0m2.328s 00:04:49.227 sys 0m2.377s 00:04:49.227 22:52:01 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.227 22:52:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:49.227 ************************************ 00:04:49.227 END TEST hugepages 00:04:49.227 ************************************ 00:04:49.227 22:52:01 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:49.227 22:52:01 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.227 22:52:01 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.227 22:52:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:49.227 ************************************ 00:04:49.227 START TEST driver 00:04:49.227 ************************************ 00:04:49.227 22:52:01 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:49.227 * Looking for test storage... 
00:04:49.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:49.227 22:52:01 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:49.227 22:52:01 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.227 22:52:01 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.792 22:52:02 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:49.792 22:52:02 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.792 22:52:02 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.792 22:52:02 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:49.792 ************************************ 00:04:49.792 START TEST guess_driver 00:04:49.792 ************************************ 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:49.792 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:49.792 Looking for driver=uio_pci_generic 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
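Editor's note: the guess_driver trace above reduces to a two-step fallback: prefer a vfio driver when /sys/kernel/iommu_groups is populated (or unsafe no-IOMMU mode is enabled), otherwise accept uio_pci_generic if modprobe can resolve its module files. Below is a minimal bash sketch of that decision under those assumptions; it is a simplification, not the repo's test/setup/driver.sh verbatim, and the "vfio-pci" name in the first branch is assumed (that branch is not exercised in this run).

    #!/usr/bin/env bash
    # Sketch of the driver pick traced above; simplified, not SPDK's actual script.
    shopt -s nullglob                      # empty iommu_groups dir -> zero-length array
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=""
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci                  # IOMMU available (name assumed; not hit in this run)
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic           # module and its deps resolve to .ko files
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }
    pick_driver                            # in the trace above this path prints uio_pci_generic

In the run above the IOMMU group glob expanded to zero entries and enable_unsafe_noiommu_mode was not 'Y', so the script fell through to uio_pci_generic and then echoed 'Looking for driver=uio_pci_generic'.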
00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.792 22:52:02 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.725 22:52:02 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.290 00:04:51.290 real 0m1.444s 00:04:51.290 user 0m0.548s 00:04:51.290 sys 0m0.904s 00:04:51.290 22:52:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.290 22:52:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:51.290 ************************************ 00:04:51.290 END TEST guess_driver 00:04:51.290 ************************************ 00:04:51.290 00:04:51.290 real 0m2.119s 00:04:51.290 user 0m0.780s 00:04:51.290 sys 0m1.401s 00:04:51.290 ************************************ 00:04:51.290 END TEST driver 00:04:51.290 ************************************ 00:04:51.290 22:52:03 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.290 22:52:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:51.291 22:52:03 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:51.291 22:52:03 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:51.291 22:52:03 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.291 22:52:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:51.291 ************************************ 00:04:51.291 START TEST devices 00:04:51.291 ************************************ 00:04:51.291 22:52:03 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:51.548 * Looking for test storage... 
00:04:51.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:51.548 22:52:03 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:51.548 22:52:03 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:51.548 22:52:03 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.548 22:52:03 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:52.115 22:52:04 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:52.115 22:52:04 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:52.115 22:52:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:52.115 22:52:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:52.115 No valid GPT data, bailing 00:04:52.115 22:52:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:52.115 22:52:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:52.115 22:52:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:52.115 22:52:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:52.115 22:52:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:52.115 22:52:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:52.115 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:52.115 22:52:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:52.115 22:52:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:52.374 No valid GPT data, bailing 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:52.374 22:52:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:52.374 22:52:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:52.374 22:52:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:52.374 22:52:04 setup.sh.devices -- 
setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:52.374 No valid GPT data, bailing 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:52.374 22:52:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:52.374 22:52:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:52.374 22:52:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:52.374 No valid GPT data, bailing 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:52.374 22:52:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:52.374 22:52:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:52.374 22:52:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:52.374 22:52:04 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:52.374 22:52:04 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:52.374 22:52:04 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.374 22:52:04 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.374 22:52:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.374 ************************************ 00:04:52.374 START TEST nvme_mount 00:04:52.374 ************************************ 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:52.374 22:52:04 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:53.748 Creating new GPT entries in memory. 00:04:53.748 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.748 other utilities. 00:04:53.748 22:52:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.748 22:52:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.748 22:52:05 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:53.748 22:52:05 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.748 22:52:05 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:54.682 Creating new GPT entries in memory. 00:04:54.682 The operation has completed successfully. 
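Editor's note: the partition_drive sequence just traced (sgdisk --zap-all, a single partition spanning sectors 2048..264191, then a filesystem and mount in the steps that follow) can be reproduced roughly as below. This is a hedged sketch for a disposable test disk, not the exact setup/common.sh code: 'udevadm settle' stands in for the repo's scripts/sync_dev_uevents.sh helper seen in the trace, and the sector count 262144 comes from common.sh@51's (( size /= 4096 )) applied to 1073741824.

    #!/usr/bin/env bash
    # Sketch of the partition-and-mount steps traced above; destructive to $disk.
    set -e
    disk=/dev/nvme0n1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    part_start=2048
    size=$(( 1073741824 / 4096 ))                 # 262144, matching common.sh@51
    part_end=$(( part_start + size - 1 ))         # 264191, as in the sgdisk call above

    sgdisk "$disk" --zap-all                      # wipe any existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:${part_start}:${part_end}
    udevadm settle                                # wait for the new partition node (stand-in for sync_dev_uevents.sh)

    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"                     # quiet + force: the disk is scratch test media
    mount "${disk}p1" "$mnt"

The later part of the test (devices.sh@110 and @113) undoes this with umount and wipefs --all on the partition and then the whole disk, before reformatting the bare /dev/nvme0n1 with a 1024M filesystem, which is what the wipefs and mkfs lines below record.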
00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58342 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.682 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.683 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:54.683 22:52:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.683 22:52:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.683 22:52:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.683 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.683 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:54.683 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:54.683 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.683 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.683 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.943 22:52:07 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:54.943 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.943 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.203 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.203 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.203 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.203 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:55.203 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:55.203 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:55.203 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.461 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:55.461 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.462 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.722 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.722 22:52:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.722 22:52:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.981 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.981 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:55.981 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:55.981 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.981 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.981 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.239 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.239 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.239 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.239 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.497 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.497 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.497 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:56.497 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:56.497 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.497 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.497 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.498 22:52:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.498 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.498 00:04:56.498 real 0m3.960s 00:04:56.498 user 0m0.684s 00:04:56.498 sys 0m0.992s 00:04:56.498 22:52:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.498 22:52:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:56.498 ************************************ 00:04:56.498 END TEST nvme_mount 00:04:56.498 
************************************ 00:04:56.498 22:52:08 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:56.498 22:52:08 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.498 22:52:08 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.498 22:52:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:56.498 ************************************ 00:04:56.498 START TEST dm_mount 00:04:56.498 ************************************ 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:56.498 22:52:08 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:57.471 Creating new GPT entries in memory. 00:04:57.471 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.471 other utilities. 00:04:57.471 22:52:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.471 22:52:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.471 22:52:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.471 22:52:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.471 22:52:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:58.406 Creating new GPT entries in memory. 00:04:58.406 The operation has completed successfully. 
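Aside: the sgdisk call above creates the first of the two GPT partitions used by the dm_mount test; the second call follows below. Condensed into a standalone sketch (disk path, helper name and sector numbers are taken from the trace; the 512-byte sector size is an assumption):

  disk=/dev/nvme0n1
  size=$((1073741824 / 4096))                 # 262144 sectors, ~128 MiB per partition at 512 B/sector
  sgdisk "$disk" --zap-all                    # drop any existing GPT/MBR metadata first
  start=2048
  for part in 1 2; do
    end=$((start + size - 1))                 # 1:2048:264191, then 2:264192:526335, as traced
    flock "$disk" sgdisk "$disk" --new="$part:$start:$end"
    start=$((end + 1))
  done
  # the test pairs each sgdisk call with scripts/sync_dev_uevents.sh block/partition
  # nvme0n1p1 nvme0n1p2 so it only proceeds once udev has created the partition nodes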
00:04:58.406 22:52:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:58.406 22:52:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.406 22:52:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:58.406 22:52:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.406 22:52:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:59.782 The operation has completed successfully. 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 58770 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.782 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.783 22:52:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.783 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:59.783 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:59.783 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:59.783 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.783 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:59.783 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:00.041 22:52:12 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.041 22:52:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.299 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.299 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:00.299 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:00.299 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.299 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.299 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:00.556 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.556 22:52:12 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:05:00.814 00:05:00.814 real 0m4.227s 00:05:00.814 user 0m0.456s 00:05:00.814 sys 0m0.733s 00:05:00.814 22:52:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.814 22:52:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:00.814 ************************************ 00:05:00.814 END TEST dm_mount 00:05:00.814 ************************************ 00:05:00.814 22:52:12 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:00.814 22:52:12 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:00.814 22:52:12 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:00.814 22:52:12 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.814 22:52:12 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:00.814 22:52:13 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.814 22:52:13 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.072 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.072 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.072 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.072 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.072 22:52:13 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:01.072 22:52:13 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.073 22:52:13 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.073 22:52:13 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.073 22:52:13 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.073 22:52:13 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.073 22:52:13 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.073 00:05:01.073 real 0m9.677s 00:05:01.073 user 0m1.770s 00:05:01.073 sys 0m2.292s 00:05:01.073 22:52:13 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.073 ************************************ 00:05:01.073 END TEST devices 00:05:01.073 ************************************ 00:05:01.073 22:52:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.073 00:05:01.073 real 0m21.651s 00:05:01.073 user 0m7.115s 00:05:01.073 sys 0m8.795s 00:05:01.073 22:52:13 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.073 22:52:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:01.073 ************************************ 00:05:01.073 END TEST setup.sh 00:05:01.073 ************************************ 00:05:01.073 22:52:13 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:02.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.008 Hugepages 00:05:02.008 node hugesize free / total 00:05:02.008 node0 1048576kB 0 / 0 00:05:02.008 node0 2048kB 2048 / 2048 00:05:02.008 00:05:02.008 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.008 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:02.008 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:02.008 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:05:02.008 22:52:14 -- spdk/autotest.sh@130 -- # uname -s 00:05:02.008 22:52:14 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:02.008 22:52:14 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:02.008 22:52:14 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.830 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.830 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.830 22:52:15 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:04.202 22:52:16 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:04.202 22:52:16 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:04.202 22:52:16 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:04.202 22:52:16 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:04.202 22:52:16 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:04.202 22:52:16 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:04.202 22:52:16 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.202 22:52:16 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:04.202 22:52:16 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:04.202 22:52:16 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:04.202 22:52:16 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:04.202 22:52:16 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.202 Waiting for block devices as requested 00:05:04.460 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.461 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.461 22:52:16 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:04.461 22:52:16 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:04.461 22:52:16 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:05:04.461 22:52:16 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.461 22:52:16 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.461 22:52:16 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:04.461 22:52:16 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.461 22:52:16 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:05:04.461 22:52:16 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:05:04.461 22:52:16 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:05:04.461 22:52:16 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:05:04.461 22:52:16 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:04.461 22:52:16 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:04.461 22:52:16 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:04.461 22:52:16 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:04.461 22:52:16 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:04.461 22:52:16 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 
00:05:04.461 22:52:16 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:04.461 22:52:16 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:04.461 22:52:16 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:04.461 22:52:16 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:04.461 22:52:16 -- common/autotest_common.sh@1553 -- # continue 00:05:04.461 22:52:16 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:04.461 22:52:16 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:04.461 22:52:16 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.461 22:52:16 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:05:04.461 22:52:16 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:04.461 22:52:16 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:04.461 22:52:16 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:04.461 22:52:16 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:04.461 22:52:16 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:04.461 22:52:16 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:04.461 22:52:16 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:04.461 22:52:16 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:04.461 22:52:16 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:04.461 22:52:16 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:04.461 22:52:16 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:04.461 22:52:16 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:04.461 22:52:16 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:04.461 22:52:16 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:04.461 22:52:16 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:04.461 22:52:16 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:04.461 22:52:16 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:04.461 22:52:16 -- common/autotest_common.sh@1553 -- # continue 00:05:04.461 22:52:16 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:04.461 22:52:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.461 22:52:16 -- common/autotest_common.sh@10 -- # set +x 00:05:04.719 22:52:16 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:04.719 22:52:16 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:04.719 22:52:16 -- common/autotest_common.sh@10 -- # set +x 00:05:04.719 22:52:16 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.283 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.283 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.541 22:52:17 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:05.542 22:52:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.542 22:52:17 -- common/autotest_common.sh@10 -- # set +x 00:05:05.542 22:52:17 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:05.542 22:52:17 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:05.542 22:52:17 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.542 22:52:17 -- common/autotest_common.sh@1573 -- 
# bdfs=() 00:05:05.542 22:52:17 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:05.542 22:52:17 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:05.542 22:52:17 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:05.542 22:52:17 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:05.542 22:52:17 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.542 22:52:17 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:05.542 22:52:17 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.542 22:52:17 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:05.542 22:52:17 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:05.542 22:52:17 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:05.542 22:52:17 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:05.542 22:52:17 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:05.542 22:52:17 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.542 22:52:17 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:05.542 22:52:17 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:05.542 22:52:17 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:05.542 22:52:17 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.542 22:52:17 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:05.542 22:52:17 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:05.542 22:52:17 -- common/autotest_common.sh@1589 -- # return 0 00:05:05.542 22:52:17 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:05.542 22:52:17 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:05.542 22:52:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:05.542 22:52:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:05.542 22:52:17 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:05.542 22:52:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:05.542 22:52:17 -- common/autotest_common.sh@10 -- # set +x 00:05:05.542 22:52:17 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.542 22:52:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.542 22:52:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.542 22:52:17 -- common/autotest_common.sh@10 -- # set +x 00:05:05.542 ************************************ 00:05:05.542 START TEST env 00:05:05.542 ************************************ 00:05:05.542 22:52:17 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.542 * Looking for test storage... 
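Aside: the opal_revert_cleanup trace above reduces to enumerating NVMe BDFs via gen_nvme.sh and skipping every controller whose PCI device ID is not 0x0a54 (both controllers here report 0x0010, so nothing is reverted). A minimal sketch of that filter, with the gen_nvme.sh/jq invocation copied from the trace and everything else assumed:

  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")    # PCI device ID; 0x0010 for the emulated controllers in this run
    [[ $device == 0x0a54 ]] && echo "$bdf"              # only 0x0a54 controllers would be opal-reverted
  done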
00:05:05.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:05.542 22:52:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.542 22:52:17 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.542 22:52:17 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.542 22:52:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.542 ************************************ 00:05:05.542 START TEST env_memory 00:05:05.542 ************************************ 00:05:05.542 22:52:17 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.542 00:05:05.542 00:05:05.542 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.542 http://cunit.sourceforge.net/ 00:05:05.542 00:05:05.542 00:05:05.542 Suite: memory 00:05:05.800 Test: alloc and free memory map ...[2024-05-14 22:52:17.963226] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.800 passed 00:05:05.800 Test: mem map translation ...[2024-05-14 22:52:17.994515] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.800 [2024-05-14 22:52:17.994580] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.800 [2024-05-14 22:52:17.994636] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.800 [2024-05-14 22:52:17.994647] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.800 passed 00:05:05.800 Test: mem map registration ...[2024-05-14 22:52:18.060971] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:05.800 [2024-05-14 22:52:18.061020] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:05.800 passed 00:05:05.800 Test: mem map adjacent registrations ...passed 00:05:05.800 00:05:05.800 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.800 suites 1 1 n/a 0 0 00:05:05.800 tests 4 4 4 0 0 00:05:05.800 asserts 152 152 152 0 n/a 00:05:05.800 00:05:05.800 Elapsed time = 0.221 seconds 00:05:05.800 00:05:05.800 real 0m0.233s 00:05:05.800 user 0m0.218s 00:05:05.800 sys 0m0.014s 00:05:05.800 22:52:18 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.801 22:52:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:05.801 ************************************ 00:05:05.801 END TEST env_memory 00:05:05.801 ************************************ 00:05:06.059 22:52:18 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.059 22:52:18 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.059 22:52:18 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.059 22:52:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.059 ************************************ 00:05:06.059 START TEST env_vtophys 00:05:06.059 ************************************ 00:05:06.059 22:52:18 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.059 EAL: lib.eal log level changed from notice to debug 00:05:06.059 EAL: Detected lcore 0 as core 0 on socket 0 00:05:06.059 EAL: Detected lcore 1 as core 0 on socket 0 00:05:06.059 EAL: Detected lcore 2 as core 0 on socket 0 00:05:06.059 EAL: Detected lcore 3 as core 0 on socket 0 00:05:06.059 EAL: Detected lcore 4 as core 0 on socket 0 00:05:06.059 EAL: Detected lcore 5 as core 0 on socket 0 00:05:06.059 EAL: Detected lcore 6 as core 0 on socket 0 00:05:06.059 EAL: Detected lcore 7 as core 0 on socket 0 00:05:06.059 EAL: Detected lcore 8 as core 0 on socket 0 00:05:06.059 EAL: Detected lcore 9 as core 0 on socket 0 00:05:06.059 EAL: Maximum logical cores by configuration: 128 00:05:06.059 EAL: Detected CPU lcores: 10 00:05:06.059 EAL: Detected NUMA nodes: 1 00:05:06.059 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:06.059 EAL: Detected shared linkage of DPDK 00:05:06.059 EAL: No shared files mode enabled, IPC will be disabled 00:05:06.059 EAL: Selected IOVA mode 'PA' 00:05:06.059 EAL: Probing VFIO support... 00:05:06.059 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.059 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:06.059 EAL: Ask a virtual area of 0x2e000 bytes 00:05:06.059 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:06.059 EAL: Setting up physically contiguous memory... 00:05:06.059 EAL: Setting maximum number of open files to 524288 00:05:06.059 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:06.059 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:06.059 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.059 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:06.059 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.059 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.059 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:06.059 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:06.059 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.059 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:06.059 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.059 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.060 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:06.060 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:06.060 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.060 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:06.060 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.060 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.060 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:06.060 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:06.060 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.060 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:06.060 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.060 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.060 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:06.060 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:06.060 EAL: Hugepages will be freed exactly as allocated. 
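Aside: the memseg reservations above are self-consistent and worth spelling out: each of the 4 segment lists holds n_segs 8192 segments of hugepage_sz 2 MiB, i.e. 16 GiB of reserved virtual address space per list and 64 GiB in total, none of it backed by hugepages until the malloc tests below expand the heap. A one-line check of the per-list size:

  printf '%#x\n' $((8192 * 2097152))   # 0x400000000, matching the 'size = 0x400000000' areas reserved above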
00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: TSC frequency is ~2200000 KHz 00:05:06.060 EAL: Main lcore 0 is ready (tid=7f937ca62a00;cpuset=[0]) 00:05:06.060 EAL: Trying to obtain current memory policy. 00:05:06.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.060 EAL: Restoring previous memory policy: 0 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was expanded by 2MB 00:05:06.060 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.060 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.060 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.060 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:06.060 00:05:06.060 00:05:06.060 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.060 http://cunit.sourceforge.net/ 00:05:06.060 00:05:06.060 00:05:06.060 Suite: components_suite 00:05:06.060 Test: vtophys_malloc_test ...passed 00:05:06.060 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:06.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.060 EAL: Restoring previous memory policy: 4 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.060 EAL: Trying to obtain current memory policy. 00:05:06.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.060 EAL: Restoring previous memory policy: 4 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.060 EAL: Trying to obtain current memory policy. 00:05:06.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.060 EAL: Restoring previous memory policy: 4 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.060 EAL: Trying to obtain current memory policy. 
00:05:06.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.060 EAL: Restoring previous memory policy: 4 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.060 EAL: Trying to obtain current memory policy. 00:05:06.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.060 EAL: Restoring previous memory policy: 4 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was expanded by 34MB 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was shrunk by 34MB 00:05:06.060 EAL: Trying to obtain current memory policy. 00:05:06.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.060 EAL: Restoring previous memory policy: 4 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was expanded by 66MB 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was shrunk by 66MB 00:05:06.060 EAL: Trying to obtain current memory policy. 00:05:06.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.060 EAL: Restoring previous memory policy: 4 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.060 EAL: request: mp_malloc_sync 00:05:06.060 EAL: No shared files mode enabled, IPC is disabled 00:05:06.060 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.319 EAL: request: mp_malloc_sync 00:05:06.319 EAL: No shared files mode enabled, IPC is disabled 00:05:06.319 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.319 EAL: Trying to obtain current memory policy. 00:05:06.319 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.319 EAL: Restoring previous memory policy: 4 00:05:06.319 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.319 EAL: request: mp_malloc_sync 00:05:06.319 EAL: No shared files mode enabled, IPC is disabled 00:05:06.319 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.319 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.319 EAL: request: mp_malloc_sync 00:05:06.319 EAL: No shared files mode enabled, IPC is disabled 00:05:06.319 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.319 EAL: Trying to obtain current memory policy. 
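Aside: the expand/shrink pairs in this loop grow as 2^n + 2 MB; that is an observation about the logged sizes, not a claim about the test's source. It can be reproduced with:

  for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
  # prints: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB, matching the 'expanded by'/'shrunk by' sizes in the trace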
00:05:06.319 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.319 EAL: Restoring previous memory policy: 4 00:05:06.319 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.319 EAL: request: mp_malloc_sync 00:05:06.319 EAL: No shared files mode enabled, IPC is disabled 00:05:06.319 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.319 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.577 EAL: request: mp_malloc_sync 00:05:06.577 EAL: No shared files mode enabled, IPC is disabled 00:05:06.577 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.577 EAL: Trying to obtain current memory policy. 00:05:06.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.577 EAL: Restoring previous memory policy: 4 00:05:06.577 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.577 EAL: request: mp_malloc_sync 00:05:06.577 EAL: No shared files mode enabled, IPC is disabled 00:05:06.577 EAL: Heap on socket 0 was expanded by 1026MB 00:05:06.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.836 passed 00:05:06.836 00:05:06.836 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.836 suites 1 1 n/a 0 0 00:05:06.836 tests 2 2 2 0 0 00:05:06.836 asserts 5358 5358 5358 0 n/a 00:05:06.836 00:05:06.836 Elapsed time = 0.704 seconds 00:05:06.836 EAL: request: mp_malloc_sync 00:05:06.836 EAL: No shared files mode enabled, IPC is disabled 00:05:06.836 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:06.836 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.836 EAL: request: mp_malloc_sync 00:05:06.836 EAL: No shared files mode enabled, IPC is disabled 00:05:06.836 EAL: Heap on socket 0 was shrunk by 2MB 00:05:06.836 EAL: No shared files mode enabled, IPC is disabled 00:05:06.836 EAL: No shared files mode enabled, IPC is disabled 00:05:06.836 EAL: No shared files mode enabled, IPC is disabled 00:05:06.836 00:05:06.836 real 0m0.904s 00:05:06.836 user 0m0.456s 00:05:06.836 sys 0m0.318s 00:05:06.836 22:52:19 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.836 22:52:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:06.836 ************************************ 00:05:06.836 END TEST env_vtophys 00:05:06.836 ************************************ 00:05:06.836 22:52:19 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:06.836 22:52:19 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.836 22:52:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.836 22:52:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.836 ************************************ 00:05:06.836 START TEST env_pci 00:05:06.836 ************************************ 00:05:06.836 22:52:19 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:06.836 00:05:06.836 00:05:06.836 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.836 http://cunit.sourceforge.net/ 00:05:06.836 00:05:06.836 00:05:06.836 Suite: pci 00:05:06.836 Test: pci_hook ...[2024-05-14 22:52:19.173998] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59957 has claimed it 00:05:06.836 passed 00:05:06.836 00:05:06.836 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.836 suites 1 1 n/a 0 0 00:05:06.836 tests 1 1 1 0 0 00:05:06.836 asserts 25 25 25 0 n/a 00:05:06.836 00:05:06.836 Elapsed time = 0.003 seconds 00:05:06.836 EAL: Cannot find 
device (10000:00:01.0) 00:05:06.836 EAL: Failed to attach device on primary process 00:05:06.836 00:05:06.836 real 0m0.025s 00:05:06.836 user 0m0.011s 00:05:06.836 sys 0m0.014s 00:05:06.836 22:52:19 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.836 22:52:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:06.836 ************************************ 00:05:06.836 END TEST env_pci 00:05:06.836 ************************************ 00:05:06.836 22:52:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:06.836 22:52:19 env -- env/env.sh@15 -- # uname 00:05:07.095 22:52:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:07.095 22:52:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:07.095 22:52:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.095 22:52:19 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:07.095 22:52:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.095 22:52:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 ************************************ 00:05:07.095 START TEST env_dpdk_post_init 00:05:07.095 ************************************ 00:05:07.095 22:52:19 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.095 EAL: Detected CPU lcores: 10 00:05:07.095 EAL: Detected NUMA nodes: 1 00:05:07.095 EAL: Detected shared linkage of DPDK 00:05:07.095 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.095 EAL: Selected IOVA mode 'PA' 00:05:07.095 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.095 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:07.095 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:07.095 Starting DPDK initialization... 00:05:07.095 Starting SPDK post initialization... 00:05:07.095 SPDK NVMe probe 00:05:07.095 Attaching to 0000:00:10.0 00:05:07.095 Attaching to 0000:00:11.0 00:05:07.095 Attached to 0000:00:10.0 00:05:07.095 Attached to 0000:00:11.0 00:05:07.095 Cleaning up... 
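Aside: env_pci above fails to attach as expected (the bogus domain 10000:00:01.0 exercises the error path; the suite still reports 1/1 tests passed), and env_dpdk_post_init is then launched with the arguments assembled earlier in env.sh. A condensed restatement of that argument handling (paths and values from the trace; exact quoting is a sketch, not a copy of env.sh):

  argv='-c 0x1'                                                           # single-core mask, per env/env.sh@14
  [ "$(uname)" = Linux ] && argv="$argv --base-virtaddr=0x200000000000"   # appended on Linux, per env/env.sh@22
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv   # probes both NVMe controllers, then cleans up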
00:05:07.095 00:05:07.095 real 0m0.183s 00:05:07.095 user 0m0.045s 00:05:07.095 sys 0m0.038s 00:05:07.095 22:52:19 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.095 22:52:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 ************************************ 00:05:07.095 END TEST env_dpdk_post_init 00:05:07.095 ************************************ 00:05:07.095 22:52:19 env -- env/env.sh@26 -- # uname 00:05:07.095 22:52:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:07.095 22:52:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:07.095 22:52:19 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.095 22:52:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.095 22:52:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 ************************************ 00:05:07.095 START TEST env_mem_callbacks 00:05:07.095 ************************************ 00:05:07.095 22:52:19 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:07.354 EAL: Detected CPU lcores: 10 00:05:07.354 EAL: Detected NUMA nodes: 1 00:05:07.354 EAL: Detected shared linkage of DPDK 00:05:07.354 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.354 EAL: Selected IOVA mode 'PA' 00:05:07.354 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.354 00:05:07.354 00:05:07.354 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.354 http://cunit.sourceforge.net/ 00:05:07.354 00:05:07.354 00:05:07.354 Suite: memory 00:05:07.354 Test: test ... 00:05:07.354 register 0x200000200000 2097152 00:05:07.354 malloc 3145728 00:05:07.354 register 0x200000400000 4194304 00:05:07.354 buf 0x200000500000 len 3145728 PASSED 00:05:07.354 malloc 64 00:05:07.354 buf 0x2000004fff40 len 64 PASSED 00:05:07.354 malloc 4194304 00:05:07.354 register 0x200000800000 6291456 00:05:07.354 buf 0x200000a00000 len 4194304 PASSED 00:05:07.354 free 0x200000500000 3145728 00:05:07.354 free 0x2000004fff40 64 00:05:07.354 unregister 0x200000400000 4194304 PASSED 00:05:07.354 free 0x200000a00000 4194304 00:05:07.354 unregister 0x200000800000 6291456 PASSED 00:05:07.354 malloc 8388608 00:05:07.354 register 0x200000400000 10485760 00:05:07.354 buf 0x200000600000 len 8388608 PASSED 00:05:07.354 free 0x200000600000 8388608 00:05:07.354 unregister 0x200000400000 10485760 PASSED 00:05:07.354 passed 00:05:07.354 00:05:07.354 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.354 suites 1 1 n/a 0 0 00:05:07.354 tests 1 1 1 0 0 00:05:07.354 asserts 15 15 15 0 n/a 00:05:07.354 00:05:07.354 Elapsed time = 0.006 seconds 00:05:07.354 00:05:07.354 real 0m0.141s 00:05:07.354 user 0m0.017s 00:05:07.354 sys 0m0.022s 00:05:07.354 22:52:19 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.354 22:52:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:07.354 ************************************ 00:05:07.354 END TEST env_mem_callbacks 00:05:07.354 ************************************ 00:05:07.354 00:05:07.354 real 0m1.823s 00:05:07.354 user 0m0.871s 00:05:07.354 sys 0m0.611s 00:05:07.354 22:52:19 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.354 22:52:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.354 ************************************ 00:05:07.354 END TEST env 00:05:07.354 
************************************ 00:05:07.354 22:52:19 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:07.354 22:52:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.354 22:52:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.354 22:52:19 -- common/autotest_common.sh@10 -- # set +x 00:05:07.354 ************************************ 00:05:07.354 START TEST rpc 00:05:07.354 ************************************ 00:05:07.354 22:52:19 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:07.612 * Looking for test storage... 00:05:07.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.612 22:52:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60067 00:05:07.612 22:52:19 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:07.612 22:52:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.612 22:52:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60067 00:05:07.612 22:52:19 rpc -- common/autotest_common.sh@827 -- # '[' -z 60067 ']' 00:05:07.612 22:52:19 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.612 22:52:19 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:07.612 22:52:19 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.612 22:52:19 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:07.612 22:52:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.612 [2024-05-14 22:52:19.850672] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:07.612 [2024-05-14 22:52:19.850801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60067 ] 00:05:07.612 [2024-05-14 22:52:19.987157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.869 [2024-05-14 22:52:20.057933] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:07.869 [2024-05-14 22:52:20.057997] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60067' to capture a snapshot of events at runtime. 00:05:07.869 [2024-05-14 22:52:20.058013] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:07.869 [2024-05-14 22:52:20.058023] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:07.869 [2024-05-14 22:52:20.058032] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60067 for offline analysis/debug. 
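The spdk_tgt instance for the rpc suite was started with -e bdev, so the bdev tracepoint group is enabled and the two notices above spell out how to get at the events. Either command works while the pid from the notice (60067 here) is still alive:

  # pull a snapshot of the enabled tracepoints from the running target
  spdk_trace -s spdk_tgt -p 60067
  # or keep the shared-memory trace file for offline decoding after the target exits
  cp /dev/shm/spdk_tgt_trace.pid60067 /tmp/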
00:05:07.869 [2024-05-14 22:52:20.058062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.869 22:52:20 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:07.869 22:52:20 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:07.869 22:52:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.869 22:52:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.869 22:52:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:07.869 22:52:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:07.869 22:52:20 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.869 22:52:20 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.869 22:52:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.869 ************************************ 00:05:07.869 START TEST rpc_integrity 00:05:07.869 ************************************ 00:05:07.869 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:07.869 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:07.869 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.869 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:08.128 { 00:05:08.128 "aliases": [ 00:05:08.128 "9f79646e-6b89-401c-92dd-2e012aaafec3" 00:05:08.128 ], 00:05:08.128 "assigned_rate_limits": { 00:05:08.128 "r_mbytes_per_sec": 0, 00:05:08.128 "rw_ios_per_sec": 0, 00:05:08.128 "rw_mbytes_per_sec": 0, 00:05:08.128 "w_mbytes_per_sec": 0 00:05:08.128 }, 00:05:08.128 "block_size": 512, 00:05:08.128 "claimed": false, 00:05:08.128 "driver_specific": {}, 00:05:08.128 "memory_domains": [ 00:05:08.128 { 00:05:08.128 "dma_device_id": "system", 00:05:08.128 "dma_device_type": 1 00:05:08.128 }, 00:05:08.128 { 00:05:08.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.128 "dma_device_type": 2 00:05:08.128 } 00:05:08.128 ], 00:05:08.128 "name": "Malloc0", 
00:05:08.128 "num_blocks": 16384, 00:05:08.128 "product_name": "Malloc disk", 00:05:08.128 "supported_io_types": { 00:05:08.128 "abort": true, 00:05:08.128 "compare": false, 00:05:08.128 "compare_and_write": false, 00:05:08.128 "flush": true, 00:05:08.128 "nvme_admin": false, 00:05:08.128 "nvme_io": false, 00:05:08.128 "read": true, 00:05:08.128 "reset": true, 00:05:08.128 "unmap": true, 00:05:08.128 "write": true, 00:05:08.128 "write_zeroes": true 00:05:08.128 }, 00:05:08.128 "uuid": "9f79646e-6b89-401c-92dd-2e012aaafec3", 00:05:08.128 "zoned": false 00:05:08.128 } 00:05:08.128 ]' 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.128 [2024-05-14 22:52:20.420569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:08.128 [2024-05-14 22:52:20.420636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:08.128 [2024-05-14 22:52:20.420661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f39da0 00:05:08.128 [2024-05-14 22:52:20.420672] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:08.128 [2024-05-14 22:52:20.422315] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:08.128 [2024-05-14 22:52:20.422353] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:08.128 Passthru0 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:08.128 { 00:05:08.128 "aliases": [ 00:05:08.128 "9f79646e-6b89-401c-92dd-2e012aaafec3" 00:05:08.128 ], 00:05:08.128 "assigned_rate_limits": { 00:05:08.128 "r_mbytes_per_sec": 0, 00:05:08.128 "rw_ios_per_sec": 0, 00:05:08.128 "rw_mbytes_per_sec": 0, 00:05:08.128 "w_mbytes_per_sec": 0 00:05:08.128 }, 00:05:08.128 "block_size": 512, 00:05:08.128 "claim_type": "exclusive_write", 00:05:08.128 "claimed": true, 00:05:08.128 "driver_specific": {}, 00:05:08.128 "memory_domains": [ 00:05:08.128 { 00:05:08.128 "dma_device_id": "system", 00:05:08.128 "dma_device_type": 1 00:05:08.128 }, 00:05:08.128 { 00:05:08.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.128 "dma_device_type": 2 00:05:08.128 } 00:05:08.128 ], 00:05:08.128 "name": "Malloc0", 00:05:08.128 "num_blocks": 16384, 00:05:08.128 "product_name": "Malloc disk", 00:05:08.128 "supported_io_types": { 00:05:08.128 "abort": true, 00:05:08.128 "compare": false, 00:05:08.128 "compare_and_write": false, 00:05:08.128 "flush": true, 00:05:08.128 "nvme_admin": false, 00:05:08.128 "nvme_io": false, 00:05:08.128 "read": true, 00:05:08.128 "reset": true, 00:05:08.128 "unmap": true, 00:05:08.128 "write": true, 00:05:08.128 "write_zeroes": true 00:05:08.128 }, 00:05:08.128 "uuid": 
"9f79646e-6b89-401c-92dd-2e012aaafec3", 00:05:08.128 "zoned": false 00:05:08.128 }, 00:05:08.128 { 00:05:08.128 "aliases": [ 00:05:08.128 "09f3f57d-c1b4-5e1a-9a77-c3e4bdf7d4e0" 00:05:08.128 ], 00:05:08.128 "assigned_rate_limits": { 00:05:08.128 "r_mbytes_per_sec": 0, 00:05:08.128 "rw_ios_per_sec": 0, 00:05:08.128 "rw_mbytes_per_sec": 0, 00:05:08.128 "w_mbytes_per_sec": 0 00:05:08.128 }, 00:05:08.128 "block_size": 512, 00:05:08.128 "claimed": false, 00:05:08.128 "driver_specific": { 00:05:08.128 "passthru": { 00:05:08.128 "base_bdev_name": "Malloc0", 00:05:08.128 "name": "Passthru0" 00:05:08.128 } 00:05:08.128 }, 00:05:08.128 "memory_domains": [ 00:05:08.128 { 00:05:08.128 "dma_device_id": "system", 00:05:08.128 "dma_device_type": 1 00:05:08.128 }, 00:05:08.128 { 00:05:08.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.128 "dma_device_type": 2 00:05:08.128 } 00:05:08.128 ], 00:05:08.128 "name": "Passthru0", 00:05:08.128 "num_blocks": 16384, 00:05:08.128 "product_name": "passthru", 00:05:08.128 "supported_io_types": { 00:05:08.128 "abort": true, 00:05:08.128 "compare": false, 00:05:08.128 "compare_and_write": false, 00:05:08.128 "flush": true, 00:05:08.128 "nvme_admin": false, 00:05:08.128 "nvme_io": false, 00:05:08.128 "read": true, 00:05:08.128 "reset": true, 00:05:08.128 "unmap": true, 00:05:08.128 "write": true, 00:05:08.128 "write_zeroes": true 00:05:08.128 }, 00:05:08.128 "uuid": "09f3f57d-c1b4-5e1a-9a77-c3e4bdf7d4e0", 00:05:08.128 "zoned": false 00:05:08.128 } 00:05:08.128 ]' 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.128 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.128 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:08.129 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.129 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.129 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.129 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:08.129 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.129 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.387 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.387 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:08.387 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:08.387 ************************************ 00:05:08.387 END TEST rpc_integrity 00:05:08.387 ************************************ 00:05:08.387 22:52:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:08.387 00:05:08.387 real 0m0.335s 00:05:08.387 user 0m0.227s 00:05:08.387 sys 0m0.032s 00:05:08.387 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.387 22:52:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.387 22:52:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:08.387 22:52:20 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.387 
22:52:20 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.387 22:52:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.387 ************************************ 00:05:08.387 START TEST rpc_plugins 00:05:08.387 ************************************ 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:08.387 22:52:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.387 22:52:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:08.387 22:52:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.387 22:52:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:08.387 { 00:05:08.387 "aliases": [ 00:05:08.387 "96c414f8-4d76-4fff-8aa4-b9b476ac9972" 00:05:08.387 ], 00:05:08.387 "assigned_rate_limits": { 00:05:08.387 "r_mbytes_per_sec": 0, 00:05:08.387 "rw_ios_per_sec": 0, 00:05:08.387 "rw_mbytes_per_sec": 0, 00:05:08.387 "w_mbytes_per_sec": 0 00:05:08.387 }, 00:05:08.387 "block_size": 4096, 00:05:08.387 "claimed": false, 00:05:08.387 "driver_specific": {}, 00:05:08.387 "memory_domains": [ 00:05:08.387 { 00:05:08.387 "dma_device_id": "system", 00:05:08.387 "dma_device_type": 1 00:05:08.387 }, 00:05:08.387 { 00:05:08.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.387 "dma_device_type": 2 00:05:08.387 } 00:05:08.387 ], 00:05:08.387 "name": "Malloc1", 00:05:08.387 "num_blocks": 256, 00:05:08.387 "product_name": "Malloc disk", 00:05:08.387 "supported_io_types": { 00:05:08.387 "abort": true, 00:05:08.387 "compare": false, 00:05:08.387 "compare_and_write": false, 00:05:08.387 "flush": true, 00:05:08.387 "nvme_admin": false, 00:05:08.387 "nvme_io": false, 00:05:08.387 "read": true, 00:05:08.387 "reset": true, 00:05:08.387 "unmap": true, 00:05:08.387 "write": true, 00:05:08.387 "write_zeroes": true 00:05:08.387 }, 00:05:08.387 "uuid": "96c414f8-4d76-4fff-8aa4-b9b476ac9972", 00:05:08.387 "zoned": false 00:05:08.387 } 00:05:08.387 ]' 00:05:08.387 22:52:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:08.387 22:52:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:08.387 22:52:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.387 22:52:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.387 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.387 22:52:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:08.387 22:52:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:08.645 
************************************ 00:05:08.645 END TEST rpc_plugins 00:05:08.645 ************************************ 00:05:08.645 22:52:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:08.645 00:05:08.645 real 0m0.175s 00:05:08.645 user 0m0.122s 00:05:08.645 sys 0m0.018s 00:05:08.645 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.645 22:52:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.645 22:52:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:08.645 22:52:20 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.645 22:52:20 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.645 22:52:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.645 ************************************ 00:05:08.645 START TEST rpc_trace_cmd_test 00:05:08.645 ************************************ 00:05:08.645 22:52:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:08.645 22:52:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:08.645 22:52:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:08.645 22:52:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.645 22:52:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:08.645 22:52:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.645 22:52:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:08.645 "bdev": { 00:05:08.645 "mask": "0x8", 00:05:08.645 "tpoint_mask": "0xffffffffffffffff" 00:05:08.645 }, 00:05:08.645 "bdev_nvme": { 00:05:08.645 "mask": "0x4000", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "blobfs": { 00:05:08.645 "mask": "0x80", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "dsa": { 00:05:08.645 "mask": "0x200", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "ftl": { 00:05:08.645 "mask": "0x40", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "iaa": { 00:05:08.645 "mask": "0x1000", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "iscsi_conn": { 00:05:08.645 "mask": "0x2", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "nvme_pcie": { 00:05:08.645 "mask": "0x800", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "nvme_tcp": { 00:05:08.645 "mask": "0x2000", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "nvmf_rdma": { 00:05:08.645 "mask": "0x10", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "nvmf_tcp": { 00:05:08.645 "mask": "0x20", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "scsi": { 00:05:08.645 "mask": "0x4", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "sock": { 00:05:08.645 "mask": "0x8000", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "thread": { 00:05:08.645 "mask": "0x400", 00:05:08.645 "tpoint_mask": "0x0" 00:05:08.645 }, 00:05:08.645 "tpoint_group_mask": "0x8", 00:05:08.645 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60067" 00:05:08.645 }' 00:05:08.645 22:52:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:08.645 22:52:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:08.645 22:52:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:08.645 22:52:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:08.645 22:52:21 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:08.903 22:52:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:08.903 22:52:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:08.903 22:52:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:08.903 22:52:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:08.903 22:52:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:08.903 00:05:08.903 real 0m0.308s 00:05:08.903 user 0m0.274s 00:05:08.903 sys 0m0.022s 00:05:08.903 22:52:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.903 ************************************ 00:05:08.903 END TEST rpc_trace_cmd_test 00:05:08.903 22:52:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:08.903 ************************************ 00:05:08.903 22:52:21 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:08.903 22:52:21 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:08.903 22:52:21 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.903 22:52:21 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.903 22:52:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.903 ************************************ 00:05:08.903 START TEST go_rpc 00:05:08.903 ************************************ 00:05:08.903 22:52:21 rpc.go_rpc -- common/autotest_common.sh@1121 -- # go_rpc 00:05:08.903 22:52:21 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:08.903 22:52:21 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:08.903 22:52:21 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:08.903 22:52:21 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:08.903 22:52:21 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:08.903 22:52:21 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.903 22:52:21 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.903 22:52:21 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.903 22:52:21 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:09.161 22:52:21 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:09.161 22:52:21 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["632429d7-2e4b-43d5-ac41-abf3e785d249"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"632429d7-2e4b-43d5-ac41-abf3e785d249","zoned":false}]' 00:05:09.161 22:52:21 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:09.161 22:52:21 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:09.161 22:52:21 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:09.161 22:52:21 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.161 22:52:21 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.161 22:52:21 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
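go_rpc repeats the same list/create/delete pattern, but through the Go JSON-RPC client example instead of the Python one: hello_gorpc returns the bdev list as JSON, so the test can watch its length go 0 -> 1 -> 0 around the malloc bdev. A hand-run sketch (it assumes the example talks to the same default socket as the target started above):

  /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc | jq length   # 0 before the create
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512  # e.g. Malloc2
  /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc | jq length   # 1 after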
00:05:09.161 22:52:21 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:09.161 22:52:21 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:09.161 22:52:21 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:09.161 22:52:21 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:09.161 00:05:09.161 real 0m0.215s 00:05:09.161 user 0m0.147s 00:05:09.161 sys 0m0.032s 00:05:09.161 22:52:21 rpc.go_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.161 22:52:21 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.161 ************************************ 00:05:09.161 END TEST go_rpc 00:05:09.161 ************************************ 00:05:09.161 22:52:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:09.161 22:52:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:09.161 22:52:21 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.161 22:52:21 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.161 22:52:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.161 ************************************ 00:05:09.161 START TEST rpc_daemon_integrity 00:05:09.161 ************************************ 00:05:09.161 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:09.161 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.161 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.161 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.161 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.161 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.161 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:09.161 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.161 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.161 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.161 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.503 { 00:05:09.503 "aliases": [ 00:05:09.503 "c360dcd6-267e-4b56-92be-9cafa6454e41" 00:05:09.503 ], 00:05:09.503 "assigned_rate_limits": { 00:05:09.503 "r_mbytes_per_sec": 0, 00:05:09.503 "rw_ios_per_sec": 0, 00:05:09.503 "rw_mbytes_per_sec": 0, 00:05:09.503 "w_mbytes_per_sec": 0 00:05:09.503 }, 00:05:09.503 "block_size": 512, 00:05:09.503 "claimed": false, 00:05:09.503 "driver_specific": {}, 00:05:09.503 "memory_domains": [ 00:05:09.503 { 00:05:09.503 "dma_device_id": "system", 00:05:09.503 "dma_device_type": 1 00:05:09.503 }, 00:05:09.503 { 00:05:09.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.503 "dma_device_type": 2 00:05:09.503 } 
00:05:09.503 ], 00:05:09.503 "name": "Malloc3", 00:05:09.503 "num_blocks": 16384, 00:05:09.503 "product_name": "Malloc disk", 00:05:09.503 "supported_io_types": { 00:05:09.503 "abort": true, 00:05:09.503 "compare": false, 00:05:09.503 "compare_and_write": false, 00:05:09.503 "flush": true, 00:05:09.503 "nvme_admin": false, 00:05:09.503 "nvme_io": false, 00:05:09.503 "read": true, 00:05:09.503 "reset": true, 00:05:09.503 "unmap": true, 00:05:09.503 "write": true, 00:05:09.503 "write_zeroes": true 00:05:09.503 }, 00:05:09.503 "uuid": "c360dcd6-267e-4b56-92be-9cafa6454e41", 00:05:09.503 "zoned": false 00:05:09.503 } 00:05:09.503 ]' 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.503 [2024-05-14 22:52:21.633485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:09.503 [2024-05-14 22:52:21.633546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.503 [2024-05-14 22:52:21.633574] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f8bfa0 00:05:09.503 [2024-05-14 22:52:21.633588] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.503 [2024-05-14 22:52:21.635317] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.503 [2024-05-14 22:52:21.635381] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.503 Passthru0 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.503 { 00:05:09.503 "aliases": [ 00:05:09.503 "c360dcd6-267e-4b56-92be-9cafa6454e41" 00:05:09.503 ], 00:05:09.503 "assigned_rate_limits": { 00:05:09.503 "r_mbytes_per_sec": 0, 00:05:09.503 "rw_ios_per_sec": 0, 00:05:09.503 "rw_mbytes_per_sec": 0, 00:05:09.503 "w_mbytes_per_sec": 0 00:05:09.503 }, 00:05:09.503 "block_size": 512, 00:05:09.503 "claim_type": "exclusive_write", 00:05:09.503 "claimed": true, 00:05:09.503 "driver_specific": {}, 00:05:09.503 "memory_domains": [ 00:05:09.503 { 00:05:09.503 "dma_device_id": "system", 00:05:09.503 "dma_device_type": 1 00:05:09.503 }, 00:05:09.503 { 00:05:09.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.503 "dma_device_type": 2 00:05:09.503 } 00:05:09.503 ], 00:05:09.503 "name": "Malloc3", 00:05:09.503 "num_blocks": 16384, 00:05:09.503 "product_name": "Malloc disk", 00:05:09.503 "supported_io_types": { 00:05:09.503 "abort": true, 00:05:09.503 "compare": false, 00:05:09.503 "compare_and_write": false, 00:05:09.503 "flush": true, 00:05:09.503 "nvme_admin": false, 00:05:09.503 "nvme_io": false, 00:05:09.503 "read": true, 00:05:09.503 "reset": true, 00:05:09.503 
"unmap": true, 00:05:09.503 "write": true, 00:05:09.503 "write_zeroes": true 00:05:09.503 }, 00:05:09.503 "uuid": "c360dcd6-267e-4b56-92be-9cafa6454e41", 00:05:09.503 "zoned": false 00:05:09.503 }, 00:05:09.503 { 00:05:09.503 "aliases": [ 00:05:09.503 "45f2dfaa-c907-5f77-9185-8761c9275d44" 00:05:09.503 ], 00:05:09.503 "assigned_rate_limits": { 00:05:09.503 "r_mbytes_per_sec": 0, 00:05:09.503 "rw_ios_per_sec": 0, 00:05:09.503 "rw_mbytes_per_sec": 0, 00:05:09.503 "w_mbytes_per_sec": 0 00:05:09.503 }, 00:05:09.503 "block_size": 512, 00:05:09.503 "claimed": false, 00:05:09.503 "driver_specific": { 00:05:09.503 "passthru": { 00:05:09.503 "base_bdev_name": "Malloc3", 00:05:09.503 "name": "Passthru0" 00:05:09.503 } 00:05:09.503 }, 00:05:09.503 "memory_domains": [ 00:05:09.503 { 00:05:09.503 "dma_device_id": "system", 00:05:09.503 "dma_device_type": 1 00:05:09.503 }, 00:05:09.503 { 00:05:09.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.503 "dma_device_type": 2 00:05:09.503 } 00:05:09.503 ], 00:05:09.503 "name": "Passthru0", 00:05:09.503 "num_blocks": 16384, 00:05:09.503 "product_name": "passthru", 00:05:09.503 "supported_io_types": { 00:05:09.503 "abort": true, 00:05:09.503 "compare": false, 00:05:09.503 "compare_and_write": false, 00:05:09.503 "flush": true, 00:05:09.503 "nvme_admin": false, 00:05:09.503 "nvme_io": false, 00:05:09.503 "read": true, 00:05:09.503 "reset": true, 00:05:09.503 "unmap": true, 00:05:09.503 "write": true, 00:05:09.503 "write_zeroes": true 00:05:09.503 }, 00:05:09.503 "uuid": "45f2dfaa-c907-5f77-9185-8761c9275d44", 00:05:09.503 "zoned": false 00:05:09.503 } 00:05:09.503 ]' 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.503 00:05:09.503 real 0m0.330s 00:05:09.503 user 0m0.218s 00:05:09.503 sys 0m0.042s 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.503 22:52:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.503 ************************************ 00:05:09.503 
END TEST rpc_daemon_integrity 00:05:09.503 ************************************ 00:05:09.503 22:52:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:09.503 22:52:21 rpc -- rpc/rpc.sh@84 -- # killprocess 60067 00:05:09.503 22:52:21 rpc -- common/autotest_common.sh@946 -- # '[' -z 60067 ']' 00:05:09.503 22:52:21 rpc -- common/autotest_common.sh@950 -- # kill -0 60067 00:05:09.503 22:52:21 rpc -- common/autotest_common.sh@951 -- # uname 00:05:09.503 22:52:21 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:09.503 22:52:21 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60067 00:05:09.503 22:52:21 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:09.503 22:52:21 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:09.503 killing process with pid 60067 00:05:09.503 22:52:21 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60067' 00:05:09.503 22:52:21 rpc -- common/autotest_common.sh@965 -- # kill 60067 00:05:09.503 22:52:21 rpc -- common/autotest_common.sh@970 -- # wait 60067 00:05:10.073 00:05:10.073 real 0m2.475s 00:05:10.073 user 0m3.423s 00:05:10.073 sys 0m0.611s 00:05:10.073 22:52:22 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.073 22:52:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.073 ************************************ 00:05:10.073 END TEST rpc 00:05:10.073 ************************************ 00:05:10.073 22:52:22 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:10.073 22:52:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.073 22:52:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.073 22:52:22 -- common/autotest_common.sh@10 -- # set +x 00:05:10.073 ************************************ 00:05:10.073 START TEST skip_rpc 00:05:10.073 ************************************ 00:05:10.073 22:52:22 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:10.073 * Looking for test storage... 00:05:10.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.073 22:52:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:10.073 22:52:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:10.073 22:52:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:10.073 22:52:22 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.073 22:52:22 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.073 22:52:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.073 ************************************ 00:05:10.073 START TEST skip_rpc 00:05:10.073 ************************************ 00:05:10.073 22:52:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:10.073 22:52:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60314 00:05:10.073 22:52:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:10.073 22:52:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.073 22:52:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:10.073 [2024-05-14 22:52:22.367058] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
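skip_rpc starts a fresh target with --no-rpc-server, so no control socket is ever created; the NOT wrapper around the client call further down is what turns the expected connection failure into a pass. The same expectation, checked by hand against the default socket path the client would use:

  # with --no-rpc-server there is nothing listening on /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version \
      || echo 'expected failure: no RPC server was started'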
00:05:10.073 [2024-05-14 22:52:22.367142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60314 ] 00:05:10.332 [2024-05-14 22:52:22.503296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.332 [2024-05-14 22:52:22.577940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.613 2024/05/14 22:52:27 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60314 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 60314 ']' 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 60314 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60314 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60314' 00:05:15.613 killing process with pid 60314 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 60314 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 60314 00:05:15.613 00:05:15.613 real 0m5.333s 00:05:15.613 user 0m5.040s 00:05:15.613 sys 0m0.184s 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- 
# xtrace_disable 00:05:15.613 22:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.613 ************************************ 00:05:15.613 END TEST skip_rpc 00:05:15.613 ************************************ 00:05:15.613 22:52:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:15.613 22:52:27 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.613 22:52:27 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.613 22:52:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.613 ************************************ 00:05:15.613 START TEST skip_rpc_with_json 00:05:15.613 ************************************ 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60407 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60407 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 60407 ']' 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:15.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:15.613 22:52:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.613 [2024-05-14 22:52:27.757949] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:05:15.613 [2024-05-14 22:52:27.758069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60407 ] 00:05:15.613 [2024-05-14 22:52:27.895510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.613 [2024-05-14 22:52:27.969263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.628 [2024-05-14 22:52:28.751648] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:16.628 2024/05/14 22:52:28 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:16.628 request: 00:05:16.628 { 00:05:16.628 "method": "nvmf_get_transports", 00:05:16.628 "params": { 00:05:16.628 "trtype": "tcp" 00:05:16.628 } 00:05:16.628 } 00:05:16.628 Got JSON-RPC error response 00:05:16.628 GoRPCClient: error on JSON-RPC call 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.628 [2024-05-14 22:52:28.763715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.628 22:52:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:16.628 { 00:05:16.628 "subsystems": [ 00:05:16.628 { 00:05:16.628 "subsystem": "keyring", 00:05:16.628 "config": [] 00:05:16.628 }, 00:05:16.628 { 00:05:16.628 "subsystem": "iobuf", 00:05:16.628 "config": [ 00:05:16.628 { 00:05:16.628 "method": "iobuf_set_options", 00:05:16.628 "params": { 00:05:16.628 "large_bufsize": 135168, 00:05:16.628 "large_pool_count": 1024, 00:05:16.628 "small_bufsize": 8192, 00:05:16.628 "small_pool_count": 8192 00:05:16.628 } 00:05:16.628 } 00:05:16.628 ] 00:05:16.628 }, 00:05:16.628 { 00:05:16.628 "subsystem": "sock", 00:05:16.628 "config": [ 00:05:16.628 { 00:05:16.628 "method": "sock_impl_set_options", 00:05:16.628 "params": { 00:05:16.628 "enable_ktls": false, 00:05:16.628 "enable_placement_id": 0, 00:05:16.628 "enable_quickack": false, 00:05:16.628 "enable_recv_pipe": 
true, 00:05:16.628 "enable_zerocopy_send_client": false, 00:05:16.628 "enable_zerocopy_send_server": true, 00:05:16.628 "impl_name": "posix", 00:05:16.628 "recv_buf_size": 2097152, 00:05:16.628 "send_buf_size": 2097152, 00:05:16.628 "tls_version": 0, 00:05:16.628 "zerocopy_threshold": 0 00:05:16.628 } 00:05:16.628 }, 00:05:16.628 { 00:05:16.628 "method": "sock_impl_set_options", 00:05:16.628 "params": { 00:05:16.628 "enable_ktls": false, 00:05:16.628 "enable_placement_id": 0, 00:05:16.628 "enable_quickack": false, 00:05:16.628 "enable_recv_pipe": true, 00:05:16.628 "enable_zerocopy_send_client": false, 00:05:16.628 "enable_zerocopy_send_server": true, 00:05:16.628 "impl_name": "ssl", 00:05:16.628 "recv_buf_size": 4096, 00:05:16.628 "send_buf_size": 4096, 00:05:16.628 "tls_version": 0, 00:05:16.628 "zerocopy_threshold": 0 00:05:16.628 } 00:05:16.628 } 00:05:16.628 ] 00:05:16.628 }, 00:05:16.628 { 00:05:16.628 "subsystem": "vmd", 00:05:16.628 "config": [] 00:05:16.628 }, 00:05:16.628 { 00:05:16.628 "subsystem": "accel", 00:05:16.628 "config": [ 00:05:16.628 { 00:05:16.628 "method": "accel_set_options", 00:05:16.628 "params": { 00:05:16.628 "buf_count": 2048, 00:05:16.628 "large_cache_size": 16, 00:05:16.628 "sequence_count": 2048, 00:05:16.628 "small_cache_size": 128, 00:05:16.628 "task_count": 2048 00:05:16.628 } 00:05:16.628 } 00:05:16.628 ] 00:05:16.628 }, 00:05:16.628 { 00:05:16.628 "subsystem": "bdev", 00:05:16.628 "config": [ 00:05:16.628 { 00:05:16.628 "method": "bdev_set_options", 00:05:16.628 "params": { 00:05:16.628 "bdev_auto_examine": true, 00:05:16.628 "bdev_io_cache_size": 256, 00:05:16.628 "bdev_io_pool_size": 65535, 00:05:16.628 "iobuf_large_cache_size": 16, 00:05:16.628 "iobuf_small_cache_size": 128 00:05:16.628 } 00:05:16.628 }, 00:05:16.628 { 00:05:16.628 "method": "bdev_raid_set_options", 00:05:16.628 "params": { 00:05:16.628 "process_window_size_kb": 1024 00:05:16.628 } 00:05:16.628 }, 00:05:16.628 { 00:05:16.628 "method": "bdev_iscsi_set_options", 00:05:16.628 "params": { 00:05:16.628 "timeout_sec": 30 00:05:16.628 } 00:05:16.628 }, 00:05:16.628 { 00:05:16.628 "method": "bdev_nvme_set_options", 00:05:16.628 "params": { 00:05:16.628 "action_on_timeout": "none", 00:05:16.628 "allow_accel_sequence": false, 00:05:16.628 "arbitration_burst": 0, 00:05:16.628 "bdev_retry_count": 3, 00:05:16.628 "ctrlr_loss_timeout_sec": 0, 00:05:16.628 "delay_cmd_submit": true, 00:05:16.628 "dhchap_dhgroups": [ 00:05:16.628 "null", 00:05:16.628 "ffdhe2048", 00:05:16.629 "ffdhe3072", 00:05:16.629 "ffdhe4096", 00:05:16.629 "ffdhe6144", 00:05:16.629 "ffdhe8192" 00:05:16.629 ], 00:05:16.629 "dhchap_digests": [ 00:05:16.629 "sha256", 00:05:16.629 "sha384", 00:05:16.629 "sha512" 00:05:16.629 ], 00:05:16.629 "disable_auto_failback": false, 00:05:16.629 "fast_io_fail_timeout_sec": 0, 00:05:16.629 "generate_uuids": false, 00:05:16.629 "high_priority_weight": 0, 00:05:16.629 "io_path_stat": false, 00:05:16.629 "io_queue_requests": 0, 00:05:16.629 "keep_alive_timeout_ms": 10000, 00:05:16.629 "low_priority_weight": 0, 00:05:16.629 "medium_priority_weight": 0, 00:05:16.629 "nvme_adminq_poll_period_us": 10000, 00:05:16.629 "nvme_error_stat": false, 00:05:16.629 "nvme_ioq_poll_period_us": 0, 00:05:16.629 "rdma_cm_event_timeout_ms": 0, 00:05:16.629 "rdma_max_cq_size": 0, 00:05:16.629 "rdma_srq_size": 0, 00:05:16.629 "reconnect_delay_sec": 0, 00:05:16.629 "timeout_admin_us": 0, 00:05:16.629 "timeout_us": 0, 00:05:16.629 "transport_ack_timeout": 0, 00:05:16.629 "transport_retry_count": 4, 00:05:16.629 
"transport_tos": 0 00:05:16.629 } 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "method": "bdev_nvme_set_hotplug", 00:05:16.629 "params": { 00:05:16.629 "enable": false, 00:05:16.629 "period_us": 100000 00:05:16.629 } 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "method": "bdev_wait_for_examine" 00:05:16.629 } 00:05:16.629 ] 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "subsystem": "scsi", 00:05:16.629 "config": null 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "subsystem": "scheduler", 00:05:16.629 "config": [ 00:05:16.629 { 00:05:16.629 "method": "framework_set_scheduler", 00:05:16.629 "params": { 00:05:16.629 "name": "static" 00:05:16.629 } 00:05:16.629 } 00:05:16.629 ] 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "subsystem": "vhost_scsi", 00:05:16.629 "config": [] 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "subsystem": "vhost_blk", 00:05:16.629 "config": [] 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "subsystem": "ublk", 00:05:16.629 "config": [] 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "subsystem": "nbd", 00:05:16.629 "config": [] 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "subsystem": "nvmf", 00:05:16.629 "config": [ 00:05:16.629 { 00:05:16.629 "method": "nvmf_set_config", 00:05:16.629 "params": { 00:05:16.629 "admin_cmd_passthru": { 00:05:16.629 "identify_ctrlr": false 00:05:16.629 }, 00:05:16.629 "discovery_filter": "match_any" 00:05:16.629 } 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "method": "nvmf_set_max_subsystems", 00:05:16.629 "params": { 00:05:16.629 "max_subsystems": 1024 00:05:16.629 } 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "method": "nvmf_set_crdt", 00:05:16.629 "params": { 00:05:16.629 "crdt1": 0, 00:05:16.629 "crdt2": 0, 00:05:16.629 "crdt3": 0 00:05:16.629 } 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "method": "nvmf_create_transport", 00:05:16.629 "params": { 00:05:16.629 "abort_timeout_sec": 1, 00:05:16.629 "ack_timeout": 0, 00:05:16.629 "buf_cache_size": 4294967295, 00:05:16.629 "c2h_success": true, 00:05:16.629 "data_wr_pool_size": 0, 00:05:16.629 "dif_insert_or_strip": false, 00:05:16.629 "in_capsule_data_size": 4096, 00:05:16.629 "io_unit_size": 131072, 00:05:16.629 "max_aq_depth": 128, 00:05:16.629 "max_io_qpairs_per_ctrlr": 127, 00:05:16.629 "max_io_size": 131072, 00:05:16.629 "max_queue_depth": 128, 00:05:16.629 "num_shared_buffers": 511, 00:05:16.629 "sock_priority": 0, 00:05:16.629 "trtype": "TCP", 00:05:16.629 "zcopy": false 00:05:16.629 } 00:05:16.629 } 00:05:16.629 ] 00:05:16.629 }, 00:05:16.629 { 00:05:16.629 "subsystem": "iscsi", 00:05:16.629 "config": [ 00:05:16.629 { 00:05:16.629 "method": "iscsi_set_options", 00:05:16.629 "params": { 00:05:16.629 "allow_duplicated_isid": false, 00:05:16.629 "chap_group": 0, 00:05:16.629 "data_out_pool_size": 2048, 00:05:16.629 "default_time2retain": 20, 00:05:16.629 "default_time2wait": 2, 00:05:16.629 "disable_chap": false, 00:05:16.629 "error_recovery_level": 0, 00:05:16.629 "first_burst_length": 8192, 00:05:16.629 "immediate_data": true, 00:05:16.629 "immediate_data_pool_size": 16384, 00:05:16.629 "max_connections_per_session": 2, 00:05:16.629 "max_large_datain_per_connection": 64, 00:05:16.629 "max_queue_depth": 64, 00:05:16.629 "max_r2t_per_connection": 4, 00:05:16.629 "max_sessions": 128, 00:05:16.629 "mutual_chap": false, 00:05:16.629 "node_base": "iqn.2016-06.io.spdk", 00:05:16.629 "nop_in_interval": 30, 00:05:16.629 "nop_timeout": 60, 00:05:16.629 "pdu_pool_size": 36864, 00:05:16.629 "require_chap": false 00:05:16.629 } 00:05:16.629 } 00:05:16.629 ] 00:05:16.629 } 00:05:16.629 ] 00:05:16.629 } 
00:05:16.629 22:52:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:16.629 22:52:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60407 00:05:16.629 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 60407 ']' 00:05:16.629 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 60407 00:05:16.629 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:16.629 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:16.629 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60407 00:05:16.629 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:16.629 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:16.629 killing process with pid 60407 00:05:16.630 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60407' 00:05:16.630 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 60407 00:05:16.630 22:52:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 60407 00:05:16.890 22:52:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60441 00:05:16.890 22:52:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:16.890 22:52:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:22.160 22:52:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60441 00:05:22.160 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 60441 ']' 00:05:22.160 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 60441 00:05:22.160 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:22.160 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:22.160 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60441 00:05:22.160 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:22.160 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:22.160 killing process with pid 60441 00:05:22.160 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60441' 00:05:22.160 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 60441 00:05:22.160 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 60441 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:22.418 00:05:22.418 real 0m6.902s 00:05:22.418 user 0m6.801s 00:05:22.418 sys 0m0.500s 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.418 ************************************ 
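Because that relaunch also used --no-rpc-server, the only way to confirm the snapshot was applied is to scrape the target's log for the 'TCP Transport Init' notice, which is exactly what the grep just below does. When an RPC server is available, the snapshot can instead be pushed back through the client (this assumes load_config takes the JSON on stdin, pairing with save_config's stdout):

  ./scripts/rpc.py load_config < test/rpc/config.json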
00:05:22.418 END TEST skip_rpc_with_json 00:05:22.418 ************************************ 00:05:22.418 22:52:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:22.418 22:52:34 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.418 22:52:34 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.418 22:52:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.418 ************************************ 00:05:22.418 START TEST skip_rpc_with_delay 00:05:22.418 ************************************ 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.418 [2024-05-14 22:52:34.710643] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:22.418 [2024-05-14 22:52:34.710798] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.418 00:05:22.418 real 0m0.088s 00:05:22.418 user 0m0.054s 00:05:22.418 sys 0m0.033s 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.418 22:52:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:22.418 ************************************ 00:05:22.418 END TEST skip_rpc_with_delay 00:05:22.418 ************************************ 00:05:22.418 22:52:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:22.418 22:52:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:22.418 22:52:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:22.418 22:52:34 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.418 22:52:34 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.418 22:52:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.418 ************************************ 00:05:22.419 START TEST exit_on_failed_rpc_init 00:05:22.419 ************************************ 00:05:22.419 22:52:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:22.419 22:52:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60556 00:05:22.419 22:52:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.419 22:52:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 60556 00:05:22.419 22:52:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 60556 ']' 00:05:22.419 22:52:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.419 22:52:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:22.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.419 22:52:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.419 22:52:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:22.419 22:52:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:22.676 [2024-05-14 22:52:34.855528] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:05:22.676 [2024-05-14 22:52:34.855627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60556 ] 00:05:22.676 [2024-05-14 22:52:34.984168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.676 [2024-05-14 22:52:35.043612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:23.612 22:52:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.612 [2024-05-14 22:52:35.909342] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:23.612 [2024-05-14 22:52:35.909442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60586 ] 00:05:23.870 [2024-05-14 22:52:36.050961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.870 [2024-05-14 22:52:36.121632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.870 [2024-05-14 22:52:36.121731] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:23.870 [2024-05-14 22:52:36.121749] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:23.870 [2024-05-14 22:52:36.121773] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 60556 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 60556 ']' 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 60556 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.870 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60556 00:05:24.128 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:24.128 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:24.128 killing process with pid 60556 00:05:24.128 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60556' 00:05:24.128 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 60556 00:05:24.128 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 60556 00:05:24.386 00:05:24.386 real 0m1.758s 00:05:24.386 user 0m2.192s 00:05:24.386 sys 0m0.318s 00:05:24.386 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.386 22:52:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.386 ************************************ 00:05:24.386 END TEST exit_on_failed_rpc_init 00:05:24.387 ************************************ 00:05:24.387 22:52:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:24.387 00:05:24.387 real 0m14.368s 00:05:24.387 user 0m14.182s 00:05:24.387 sys 0m1.216s 00:05:24.387 22:52:36 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.387 22:52:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.387 ************************************ 00:05:24.387 END TEST skip_rpc 00:05:24.387 ************************************ 00:05:24.387 22:52:36 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:24.387 22:52:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.387 22:52:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.387 22:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:24.387 
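The failure traced above is the point of exit_on_failed_rpc_init: a second spdk_tgt (core mask 0x2) tries to bring up its RPC server on the default /var/tmp/spdk.sock, finds the first instance already bound to it, and exits non-zero, which the test treats as a pass. A hedged sketch of running two targets side by side instead, giving each instance its own RPC socket with -r (the socket paths here are illustrative, not taken from this run):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &   # first instance, its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &   # second instance, separate core mask and socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_a.sock save_config    # -s selects which instance an RPC talks to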
************************************ 00:05:24.387 START TEST rpc_client 00:05:24.387 ************************************ 00:05:24.387 22:52:36 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:24.387 * Looking for test storage... 00:05:24.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:24.387 22:52:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:24.387 OK 00:05:24.387 22:52:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:24.387 00:05:24.387 real 0m0.100s 00:05:24.387 user 0m0.044s 00:05:24.387 sys 0m0.060s 00:05:24.387 22:52:36 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.387 22:52:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:24.387 ************************************ 00:05:24.387 END TEST rpc_client 00:05:24.387 ************************************ 00:05:24.645 22:52:36 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:24.645 22:52:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.645 22:52:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.645 22:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:24.645 ************************************ 00:05:24.645 START TEST json_config 00:05:24.645 ************************************ 00:05:24.645 22:52:36 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:24.645 22:52:36 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.645 22:52:36 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.645 22:52:36 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.645 22:52:36 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.645 22:52:36 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.645 22:52:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.645 22:52:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.646 22:52:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.646 22:52:36 json_config -- paths/export.sh@5 -- # export PATH 00:05:24.646 22:52:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.646 22:52:36 json_config -- nvmf/common.sh@47 -- # : 0 00:05:24.646 22:52:36 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:24.646 22:52:36 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:24.646 22:52:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.646 22:52:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.646 22:52:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.646 22:52:36 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:24.646 22:52:36 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:24.646 22:52:36 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:24.646 22:52:36 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.646 INFO: JSON configuration test init 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:24.646 22:52:36 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:24.646 22:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:24.646 22:52:36 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:24.646 22:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.646 22:52:36 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:24.646 22:52:36 json_config -- json_config/common.sh@9 -- # local app=target 00:05:24.646 22:52:36 json_config -- json_config/common.sh@10 -- # shift 00:05:24.646 22:52:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.646 22:52:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.646 22:52:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.646 22:52:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.646 22:52:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.646 22:52:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60704 00:05:24.646 Waiting for target to run... 00:05:24.646 22:52:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.646 22:52:36 json_config -- json_config/common.sh@25 -- # waitforlisten 60704 /var/tmp/spdk_tgt.sock 00:05:24.646 22:52:36 json_config -- common/autotest_common.sh@827 -- # '[' -z 60704 ']' 00:05:24.646 22:52:36 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:24.646 22:52:36 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.646 22:52:36 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:24.646 22:52:36 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.646 22:52:36 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.646 22:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.646 [2024-05-14 22:52:36.934265] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:24.646 [2024-05-14 22:52:36.934363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60704 ] 00:05:24.904 [2024-05-14 22:52:37.241680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.904 [2024-05-14 22:52:37.287562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.839 22:52:37 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.839 22:52:37 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:25.839 00:05:25.839 22:52:37 json_config -- json_config/common.sh@26 -- # echo '' 00:05:25.839 22:52:37 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:25.839 22:52:37 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:25.839 22:52:37 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:25.839 22:52:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.839 22:52:37 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:25.839 22:52:37 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:25.839 22:52:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.839 22:52:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.839 22:52:37 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:25.839 22:52:37 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:25.839 22:52:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:26.096 22:52:38 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:26.096 22:52:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:26.096 22:52:38 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:26.096 22:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.096 22:52:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:26.096 22:52:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:26.096 22:52:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:26.096 22:52:38 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:26.096 22:52:38 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:26.096 22:52:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:26.354 
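Every target RPC in this suite goes through the tgt_rpc helper traced at common.sh@57. A minimal sketch of that pattern, using a method this run actually calls; the one-line function body is an assumption inferred from the trace, not copied from common.sh:

  tgt_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }   # assumed wrapper shape
  tgt_rpc notify_get_types    # lists the enabled notification types, e.g. bdev_register / bdev_unregister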
22:52:38 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:26.354 22:52:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.354 22:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:26.354 22:52:38 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:26.354 22:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:26.354 22:52:38 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:26.354 22:52:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:26.612 MallocForNvmf0 00:05:26.612 22:52:38 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:26.612 22:52:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:26.872 MallocForNvmf1 00:05:26.872 22:52:39 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:26.872 22:52:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:27.160 [2024-05-14 22:52:39.438867] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.160 22:52:39 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:27.160 22:52:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:27.437 22:52:39 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:27.437 22:52:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:27.695 22:52:39 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:27.695 22:52:39 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:27.953 22:52:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:27.953 22:52:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:28.211 [2024-05-14 22:52:40.499323] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:28.211 [2024-05-14 22:52:40.499572] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:28.211 22:52:40 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:28.211 22:52:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.211 22:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.211 22:52:40 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:28.211 22:52:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.211 22:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.211 22:52:40 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:28.211 22:52:40 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:28.211 22:52:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:28.469 MallocBdevForConfigChangeCheck 00:05:28.469 22:52:40 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:28.469 22:52:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.469 22:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.728 22:52:40 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:28.728 22:52:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.986 INFO: shutting down applications... 00:05:28.986 22:52:41 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
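The trace above assembles the NVMe-oF target over the RPC socket one call at a time. Condensed into a single sequence, with every command taken verbatim from this run; the $rpc variable is only shorthand introduced here for readability:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0     # malloc bdevs that will back the namespaces
  $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0          # "TCP Transport Init" in the log
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420   # "Listening on 127.0.0.1 port 4420"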
00:05:28.986 22:52:41 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:28.986 22:52:41 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:28.986 22:52:41 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:28.986 22:52:41 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:29.243 Calling clear_iscsi_subsystem 00:05:29.243 Calling clear_nvmf_subsystem 00:05:29.243 Calling clear_nbd_subsystem 00:05:29.243 Calling clear_ublk_subsystem 00:05:29.243 Calling clear_vhost_blk_subsystem 00:05:29.243 Calling clear_vhost_scsi_subsystem 00:05:29.243 Calling clear_bdev_subsystem 00:05:29.243 22:52:41 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:29.243 22:52:41 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:29.243 22:52:41 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:29.243 22:52:41 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.243 22:52:41 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:29.243 22:52:41 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:29.808 22:52:42 json_config -- json_config/json_config.sh@345 -- # break 00:05:29.808 22:52:42 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:29.808 22:52:42 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:29.808 22:52:42 json_config -- json_config/common.sh@31 -- # local app=target 00:05:29.808 22:52:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:29.808 22:52:42 json_config -- json_config/common.sh@35 -- # [[ -n 60704 ]] 00:05:29.808 22:52:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60704 00:05:29.808 [2024-05-14 22:52:42.007130] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:29.808 22:52:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:29.808 22:52:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.808 22:52:42 json_config -- json_config/common.sh@41 -- # kill -0 60704 00:05:29.808 22:52:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.375 22:52:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.375 22:52:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.375 22:52:42 json_config -- json_config/common.sh@41 -- # kill -0 60704 00:05:30.375 22:52:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:30.375 22:52:42 json_config -- json_config/common.sh@43 -- # break 00:05:30.375 22:52:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:30.375 22:52:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:30.375 SPDK target shutdown done 00:05:30.375 INFO: relaunching applications... 00:05:30.375 22:52:42 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
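Before shutting the target down, json_config_clear wipes every subsystem and then verifies nothing is left. A sketch of that check, roughly the pipeline traced at json_config.sh@345; the commands are the ones shown in the trace, but the exact plumbing between them is not visible there and is assumed here:

  /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty    # succeeds only once no config remains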
00:05:30.375 22:52:42 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:30.375 22:52:42 json_config -- json_config/common.sh@9 -- # local app=target 00:05:30.375 22:52:42 json_config -- json_config/common.sh@10 -- # shift 00:05:30.375 22:52:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.375 22:52:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.375 22:52:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.375 22:52:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.375 22:52:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.375 22:52:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60973 00:05:30.375 22:52:42 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:30.375 Waiting for target to run... 00:05:30.375 22:52:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.375 22:52:42 json_config -- json_config/common.sh@25 -- # waitforlisten 60973 /var/tmp/spdk_tgt.sock 00:05:30.375 22:52:42 json_config -- common/autotest_common.sh@827 -- # '[' -z 60973 ']' 00:05:30.375 22:52:42 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.375 22:52:42 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:30.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.375 22:52:42 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.375 22:52:42 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:30.375 22:52:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.375 [2024-05-14 22:52:42.594533] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:30.375 [2024-05-14 22:52:42.594665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60973 ] 00:05:30.634 [2024-05-14 22:52:42.914872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.634 [2024-05-14 22:52:42.962365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.891 [2024-05-14 22:52:43.250198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.148 [2024-05-14 22:52:43.282166] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:31.148 [2024-05-14 22:52:43.282434] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:31.407 22:52:43 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:31.407 22:52:43 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:31.407 00:05:31.407 22:52:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:31.407 22:52:43 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:31.407 INFO: Checking if target configuration is the same... 
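The json_diff.sh trace that follows checks that the relaunched target's live configuration matches spdk_tgt_config.json. A condensed sketch of that comparison; the temporary file names are illustrative (json_diff.sh uses mktemp, as the trace shows) and the exact redirections are assumed:

  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
  $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file.json
  diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'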
00:05:31.407 22:52:43 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:31.407 22:52:43 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.407 22:52:43 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:31.407 22:52:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.407 + '[' 2 -ne 2 ']' 00:05:31.407 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:31.407 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:31.407 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:31.407 +++ basename /dev/fd/62 00:05:31.407 ++ mktemp /tmp/62.XXX 00:05:31.407 + tmp_file_1=/tmp/62.DmG 00:05:31.407 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.407 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:31.407 + tmp_file_2=/tmp/spdk_tgt_config.json.srS 00:05:31.407 + ret=0 00:05:31.407 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:31.665 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:31.665 + diff -u /tmp/62.DmG /tmp/spdk_tgt_config.json.srS 00:05:31.665 INFO: JSON config files are the same 00:05:31.665 + echo 'INFO: JSON config files are the same' 00:05:31.665 + rm /tmp/62.DmG /tmp/spdk_tgt_config.json.srS 00:05:31.665 + exit 0 00:05:31.665 22:52:44 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:31.665 INFO: changing configuration and checking if this can be detected... 00:05:31.665 22:52:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:31.665 22:52:44 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:31.665 22:52:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.232 22:52:44 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.232 22:52:44 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:32.232 22:52:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.232 + '[' 2 -ne 2 ']' 00:05:32.232 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:32.232 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:32.232 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:32.232 +++ basename /dev/fd/62 00:05:32.232 ++ mktemp /tmp/62.XXX 00:05:32.232 + tmp_file_1=/tmp/62.9RY 00:05:32.232 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.232 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.232 + tmp_file_2=/tmp/spdk_tgt_config.json.Og3 00:05:32.232 + ret=0 00:05:32.232 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:32.537 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:32.537 + diff -u /tmp/62.9RY /tmp/spdk_tgt_config.json.Og3 00:05:32.537 + ret=1 00:05:32.537 + echo '=== Start of file: /tmp/62.9RY ===' 00:05:32.537 + cat /tmp/62.9RY 00:05:32.537 + echo '=== End of file: /tmp/62.9RY ===' 00:05:32.537 + echo '' 00:05:32.537 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Og3 ===' 00:05:32.537 + cat /tmp/spdk_tgt_config.json.Og3 00:05:32.537 + echo '=== End of file: /tmp/spdk_tgt_config.json.Og3 ===' 00:05:32.537 + echo '' 00:05:32.537 + rm /tmp/62.9RY /tmp/spdk_tgt_config.json.Og3 00:05:32.537 + exit 1 00:05:32.537 INFO: configuration change detected. 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:32.537 22:52:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:32.537 22:52:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@317 -- # [[ -n 60973 ]] 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:32.537 22:52:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:32.537 22:52:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:32.537 22:52:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.537 22:52:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.537 22:52:44 json_config -- json_config/json_config.sh@323 -- # killprocess 60973 00:05:32.537 22:52:44 json_config -- common/autotest_common.sh@946 -- # '[' -z 60973 ']' 00:05:32.537 22:52:44 json_config -- common/autotest_common.sh@950 -- # kill -0 60973 00:05:32.537 22:52:44 json_config -- common/autotest_common.sh@951 -- # uname 00:05:32.537 22:52:44 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:32.537 22:52:44 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60973 00:05:32.796 
22:52:44 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:32.796 22:52:44 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:32.796 killing process with pid 60973 00:05:32.796 22:52:44 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60973' 00:05:32.796 22:52:44 json_config -- common/autotest_common.sh@965 -- # kill 60973 00:05:32.796 [2024-05-14 22:52:44.915501] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:32.796 22:52:44 json_config -- common/autotest_common.sh@970 -- # wait 60973 00:05:32.796 22:52:45 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.796 22:52:45 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:32.796 22:52:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.796 22:52:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.796 22:52:45 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:32.796 INFO: Success 00:05:32.796 22:52:45 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:32.796 00:05:32.796 real 0m8.385s 00:05:32.796 user 0m12.321s 00:05:32.796 sys 0m1.504s 00:05:32.796 22:52:45 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.796 22:52:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.796 ************************************ 00:05:32.796 END TEST json_config 00:05:32.796 ************************************ 00:05:33.055 22:52:45 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:33.055 22:52:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.055 22:52:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.055 22:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:33.055 ************************************ 00:05:33.055 START TEST json_config_extra_key 00:05:33.055 ************************************ 00:05:33.055 22:52:45 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.055 22:52:45 
json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:33.055 22:52:45 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.055 22:52:45 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.055 22:52:45 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.055 22:52:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.055 22:52:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.055 22:52:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.055 22:52:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:33.055 22:52:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.055 22:52:45 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:33.055 22:52:45 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.055 INFO: launching applications... 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:33.055 22:52:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:33.055 22:52:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:33.055 22:52:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:33.055 22:52:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.055 22:52:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.055 22:52:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.055 22:52:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.055 22:52:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.055 22:52:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61150 00:05:33.055 Waiting for target to run... 00:05:33.055 22:52:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:33.055 22:52:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61150 /var/tmp/spdk_tgt.sock 00:05:33.055 22:52:45 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 61150 ']' 00:05:33.055 22:52:45 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.055 22:52:45 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:33.055 22:52:45 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:33.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.055 22:52:45 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.055 22:52:45 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:33.055 22:52:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:33.056 [2024-05-14 22:52:45.370978] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:33.056 [2024-05-14 22:52:45.371078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61150 ] 00:05:33.314 [2024-05-14 22:52:45.679587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.573 [2024-05-14 22:52:45.727483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.141 22:52:46 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:34.141 00:05:34.141 22:52:46 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:34.141 22:52:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:34.141 INFO: shutting down applications... 00:05:34.141 22:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
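json_config_test_shutdown_app, traced just below, stops the target by sending SIGINT and then polling the pid for up to 30 half-second intervals. A minimal sketch of that loop; the pid shown is the one this run recorded for the extra_key target, and the error-output handling is simplified:

  pid=61150
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break    # kill -0 only tests whether the process is still alive
      sleep 0.5
  done
  echo 'SPDK target shutdown done'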
00:05:34.141 22:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:34.141 22:52:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:34.141 22:52:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:34.141 22:52:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61150 ]] 00:05:34.141 22:52:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61150 00:05:34.141 22:52:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:34.141 22:52:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.141 22:52:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61150 00:05:34.141 22:52:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.708 22:52:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.708 22:52:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.708 22:52:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61150 00:05:34.708 22:52:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.708 22:52:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:34.708 22:52:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.708 22:52:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.708 SPDK target shutdown done 00:05:34.708 Success 00:05:34.708 22:52:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:34.708 00:05:34.708 real 0m1.621s 00:05:34.708 user 0m1.510s 00:05:34.708 sys 0m0.322s 00:05:34.708 22:52:46 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.708 22:52:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.708 ************************************ 00:05:34.708 END TEST json_config_extra_key 00:05:34.708 ************************************ 00:05:34.708 22:52:46 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.708 22:52:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.708 22:52:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.708 22:52:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.708 ************************************ 00:05:34.708 START TEST alias_rpc 00:05:34.708 ************************************ 00:05:34.708 22:52:46 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.708 * Looking for test storage... 
00:05:34.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:34.708 22:52:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:34.708 22:52:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61227 00:05:34.708 22:52:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61227 00:05:34.708 22:52:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.708 22:52:46 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 61227 ']' 00:05:34.708 22:52:46 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.708 22:52:46 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:34.708 22:52:46 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.708 22:52:46 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:34.708 22:52:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.708 [2024-05-14 22:52:47.046479] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:34.708 [2024-05-14 22:52:47.046574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61227 ] 00:05:34.967 [2024-05-14 22:52:47.184979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.967 [2024-05-14 22:52:47.244097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.899 22:52:48 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:35.899 22:52:48 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:35.899 22:52:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:36.158 22:52:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61227 00:05:36.158 22:52:48 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 61227 ']' 00:05:36.158 22:52:48 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 61227 00:05:36.158 22:52:48 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:36.158 22:52:48 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.158 22:52:48 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61227 00:05:36.158 22:52:48 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:36.158 killing process with pid 61227 00:05:36.158 22:52:48 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:36.158 22:52:48 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61227' 00:05:36.158 22:52:48 alias_rpc -- common/autotest_common.sh@965 -- # kill 61227 00:05:36.158 22:52:48 alias_rpc -- common/autotest_common.sh@970 -- # wait 61227 00:05:36.417 00:05:36.417 real 0m1.784s 00:05:36.417 user 0m2.205s 00:05:36.417 sys 0m0.333s 00:05:36.417 22:52:48 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.417 22:52:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.417 ************************************ 00:05:36.417 END TEST alias_rpc 00:05:36.417 ************************************ 00:05:36.417 22:52:48 -- 
spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:05:36.417 22:52:48 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.417 22:52:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.417 22:52:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.417 22:52:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.417 ************************************ 00:05:36.417 START TEST dpdk_mem_utility 00:05:36.417 ************************************ 00:05:36.417 22:52:48 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.676 * Looking for test storage... 00:05:36.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:36.676 22:52:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:36.676 22:52:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61313 00:05:36.676 22:52:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.676 22:52:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61313 00:05:36.676 22:52:48 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 61313 ']' 00:05:36.676 22:52:48 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.676 22:52:48 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.676 22:52:48 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.676 22:52:48 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.676 22:52:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.676 [2024-05-14 22:52:48.914689] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:05:36.676 [2024-05-14 22:52:48.914789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61313 ] 00:05:36.676 [2024-05-14 22:52:49.052803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.936 [2024-05-14 22:52:49.113847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.502 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:37.502 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:37.502 22:52:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:37.502 22:52:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:37.502 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.502 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.502 { 00:05:37.502 "filename": "/tmp/spdk_mem_dump.txt" 00:05:37.502 } 00:05:37.503 22:52:49 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.503 22:52:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:37.761 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:37.761 1 heaps totaling size 814.000000 MiB 00:05:37.761 size: 814.000000 MiB heap id: 0 00:05:37.761 end heaps---------- 00:05:37.761 8 mempools totaling size 598.116089 MiB 00:05:37.761 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:37.761 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:37.761 size: 84.521057 MiB name: bdev_io_61313 00:05:37.761 size: 51.011292 MiB name: evtpool_61313 00:05:37.761 size: 50.003479 MiB name: msgpool_61313 00:05:37.761 size: 21.763794 MiB name: PDU_Pool 00:05:37.761 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:37.761 size: 0.026123 MiB name: Session_Pool 00:05:37.762 end mempools------- 00:05:37.762 6 memzones totaling size 4.142822 MiB 00:05:37.762 size: 1.000366 MiB name: RG_ring_0_61313 00:05:37.762 size: 1.000366 MiB name: RG_ring_1_61313 00:05:37.762 size: 1.000366 MiB name: RG_ring_4_61313 00:05:37.762 size: 1.000366 MiB name: RG_ring_5_61313 00:05:37.762 size: 0.125366 MiB name: RG_ring_2_61313 00:05:37.762 size: 0.015991 MiB name: RG_ring_3_61313 00:05:37.762 end memzones------- 00:05:37.762 22:52:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:37.762 heap id: 0 total size: 814.000000 MiB number of busy elements: 236 number of free elements: 15 00:05:37.762 list of free elements. 
size: 12.483643 MiB 00:05:37.762 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:37.762 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:37.762 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:37.762 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:37.762 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:37.762 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:37.762 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:37.762 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:37.762 element at address: 0x200000200000 with size: 0.836853 MiB 00:05:37.762 element at address: 0x20001aa00000 with size: 0.571167 MiB 00:05:37.762 element at address: 0x20000b200000 with size: 0.489258 MiB 00:05:37.762 element at address: 0x200000800000 with size: 0.486877 MiB 00:05:37.762 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:37.762 element at address: 0x200027e00000 with size: 0.397949 MiB 00:05:37.762 element at address: 0x200003a00000 with size: 0.350769 MiB 00:05:37.762 list of standard malloc elements. size: 199.253784 MiB 00:05:37.762 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:37.762 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:37.762 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:37.762 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:37.762 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:37.762 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:37.762 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:37.762 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:37.762 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:37.762 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d7640 with size: 0.000183 MiB 
00:05:37.762 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:37.762 element at 
address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93b80 
with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:37.762 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:37.763 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e65e00 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6cac0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 
00:05:37.763 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:37.763 element at 
address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:37.763 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:37.763 list of memzone associated elements. size: 602.262573 MiB 00:05:37.763 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:37.763 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:37.763 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:37.763 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:37.763 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:37.763 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61313_0 00:05:37.763 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:37.763 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61313_0 00:05:37.763 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:37.763 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61313_0 00:05:37.763 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:37.763 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:37.763 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:37.763 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:37.763 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:37.763 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61313 00:05:37.763 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:37.763 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61313 00:05:37.763 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:37.763 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61313 00:05:37.763 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:37.763 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:37.763 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:37.763 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:37.763 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:37.763 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:37.763 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:37.763 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:37.763 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:37.763 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61313 00:05:37.763 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:37.763 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61313 00:05:37.763 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:37.763 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61313 00:05:37.763 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:37.763 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61313 00:05:37.763 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:37.763 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61313 00:05:37.763 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:37.763 associated memzone info: size: 0.500366 MiB 
name: RG_MP_PDU_Pool 00:05:37.763 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:37.763 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:37.763 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:37.763 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:37.763 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:37.763 associated memzone info: size: 0.125366 MiB name: RG_ring_2_61313 00:05:37.763 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:37.763 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:37.763 element at address: 0x200027e65f80 with size: 0.023743 MiB 00:05:37.763 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:37.763 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:37.763 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61313 00:05:37.763 element at address: 0x200027e6c0c0 with size: 0.002441 MiB 00:05:37.763 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:37.763 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:37.763 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61313 00:05:37.763 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:37.763 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61313 00:05:37.763 element at address: 0x200027e6cb80 with size: 0.000305 MiB 00:05:37.763 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:37.763 22:52:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:37.763 22:52:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61313 00:05:37.763 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 61313 ']' 00:05:37.763 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 61313 00:05:37.763 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:37.763 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:37.763 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61313 00:05:37.763 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:37.763 killing process with pid 61313 00:05:37.763 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:37.763 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61313' 00:05:37.763 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 61313 00:05:37.763 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 61313 00:05:38.024 00:05:38.024 real 0m1.619s 00:05:38.024 user 0m1.897s 00:05:38.024 sys 0m0.336s 00:05:38.024 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.024 ************************************ 00:05:38.024 END TEST dpdk_mem_utility 00:05:38.024 ************************************ 00:05:38.024 22:52:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.024 22:52:50 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:38.024 22:52:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.024 22:52:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.024 22:52:50 -- common/autotest_common.sh@10 -- # set +x 
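The dpdk_mem_utility run above exercises SPDK's env_dpdk_get_mem_stats RPC and the scripts/dpdk_mem_info.py helper: the RPC makes the target write /tmp/spdk_mem_dump.txt, and the helper summarizes heaps, mempools and memzones from it. A minimal sketch of driving the same flow by hand against an already running target follows; the checkout location $SPDK_DIR and reliance on the default RPC socket /var/tmp/spdk.sock are assumptions for illustration, not taken from the test script itself.

#!/usr/bin/env bash
# Sketch only: dump and summarize DPDK memory for a running SPDK target.
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout path

# Ask the target to write its DPDK memory statistics; the RPC replies with the
# dump filename (/tmp/spdk_mem_dump.txt in the trace above).
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

# Overall summary of heaps, mempools and memzones, as test_dpdk_mem_info.sh does.
"$SPDK_DIR/scripts/dpdk_mem_info.py"

# Per-heap detail for heap id 0, matching the "-m 0" invocation in the trace.
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0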
00:05:38.024 ************************************ 00:05:38.024 START TEST event 00:05:38.024 ************************************ 00:05:38.024 22:52:50 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:38.281 * Looking for test storage... 00:05:38.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:38.281 22:52:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:38.281 22:52:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:38.281 22:52:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:38.281 22:52:50 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:38.281 22:52:50 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.281 22:52:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.281 ************************************ 00:05:38.281 START TEST event_perf 00:05:38.281 ************************************ 00:05:38.281 22:52:50 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:38.281 Running I/O for 1 seconds...[2024-05-14 22:52:50.525361] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:38.281 [2024-05-14 22:52:50.525468] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61408 ] 00:05:38.281 [2024-05-14 22:52:50.663570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.537 [2024-05-14 22:52:50.727620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.537 [2024-05-14 22:52:50.727745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.537 [2024-05-14 22:52:50.727894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.537 Running I/O for 1 seconds...[2024-05-14 22:52:50.728037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.471 00:05:39.471 lcore 0: 187923 00:05:39.471 lcore 1: 187923 00:05:39.471 lcore 2: 187922 00:05:39.471 lcore 3: 187923 00:05:39.471 done. 00:05:39.471 00:05:39.471 real 0m1.322s 00:05:39.471 user 0m4.145s 00:05:39.471 sys 0m0.054s 00:05:39.471 22:52:51 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.471 22:52:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.471 ************************************ 00:05:39.471 END TEST event_perf 00:05:39.471 ************************************ 00:05:39.729 22:52:51 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:39.729 22:52:51 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:39.729 22:52:51 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.729 22:52:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.729 ************************************ 00:05:39.729 START TEST event_reactor 00:05:39.729 ************************************ 00:05:39.729 22:52:51 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:39.729 [2024-05-14 22:52:51.894245] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:05:39.729 [2024-05-14 22:52:51.894359] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61441 ] 00:05:39.729 [2024-05-14 22:52:52.035531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.729 [2024-05-14 22:52:52.089895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.103 test_start 00:05:41.103 oneshot 00:05:41.103 tick 100 00:05:41.103 tick 100 00:05:41.103 tick 250 00:05:41.103 tick 100 00:05:41.103 tick 100 00:05:41.103 tick 100 00:05:41.103 tick 250 00:05:41.103 tick 500 00:05:41.103 tick 100 00:05:41.103 tick 100 00:05:41.103 tick 250 00:05:41.103 tick 100 00:05:41.103 tick 100 00:05:41.103 test_end 00:05:41.103 00:05:41.103 real 0m1.307s 00:05:41.103 user 0m1.161s 00:05:41.103 sys 0m0.037s 00:05:41.103 22:52:53 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.103 ************************************ 00:05:41.103 END TEST event_reactor 00:05:41.103 ************************************ 00:05:41.103 22:52:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:41.103 22:52:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:41.103 22:52:53 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:41.103 22:52:53 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.103 22:52:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.103 ************************************ 00:05:41.103 START TEST event_reactor_perf 00:05:41.103 ************************************ 00:05:41.103 22:52:53 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:41.103 [2024-05-14 22:52:53.250422] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:05:41.103 [2024-05-14 22:52:53.250993] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61477 ] 00:05:41.103 [2024-05-14 22:52:53.387012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.103 [2024-05-14 22:52:53.444707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.476 test_start 00:05:42.476 test_end 00:05:42.476 Performance: 356742 events per second 00:05:42.476 00:05:42.476 real 0m1.313s 00:05:42.476 user 0m1.160s 00:05:42.476 sys 0m0.045s 00:05:42.476 22:52:54 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.476 22:52:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.476 ************************************ 00:05:42.476 END TEST event_reactor_perf 00:05:42.476 ************************************ 00:05:42.476 22:52:54 event -- event/event.sh@49 -- # uname -s 00:05:42.476 22:52:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:42.476 22:52:54 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:42.476 22:52:54 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.476 22:52:54 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.476 22:52:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.476 ************************************ 00:05:42.476 START TEST event_scheduler 00:05:42.476 ************************************ 00:05:42.476 22:52:54 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:42.476 * Looking for test storage... 00:05:42.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:42.476 22:52:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:42.476 22:52:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61538 00:05:42.476 22:52:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:42.476 22:52:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.476 22:52:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61538 00:05:42.476 22:52:54 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 61538 ']' 00:05:42.476 22:52:54 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.476 22:52:54 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.476 22:52:54 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.476 22:52:54 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.476 22:52:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.476 [2024-05-14 22:52:54.734971] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:05:42.476 [2024-05-14 22:52:54.735079] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61538 ] 00:05:42.734 [2024-05-14 22:52:54.876519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.734 [2024-05-14 22:52:54.953428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.734 [2024-05-14 22:52:54.953565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.734 [2024-05-14 22:52:54.954601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.734 [2024-05-14 22:52:54.954608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.669 22:52:55 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.669 22:52:55 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:43.669 22:52:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:43.669 22:52:55 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.669 22:52:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.669 POWER: Env isn't set yet! 00:05:43.669 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:43.669 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:43.669 POWER: Cannot set governor of lcore 0 to userspace 00:05:43.669 POWER: Attempting to initialise PSTAT power management... 00:05:43.669 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:43.669 POWER: Cannot set governor of lcore 0 to performance 00:05:43.669 POWER: Attempting to initialise AMD PSTATE power management... 00:05:43.669 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:43.669 POWER: Cannot set governor of lcore 0 to userspace 00:05:43.669 POWER: Attempting to initialise CPPC power management... 00:05:43.669 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:43.669 POWER: Cannot set governor of lcore 0 to userspace 00:05:43.669 POWER: Attempting to initialise VM power management... 00:05:43.669 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:43.669 POWER: Unable to set Power Management Environment for lcore 0 00:05:43.669 [2024-05-14 22:52:55.719717] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:43.669 [2024-05-14 22:52:55.719732] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:43.669 [2024-05-14 22:52:55.719740] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:43.669 22:52:55 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.669 22:52:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:43.669 22:52:55 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.669 22:52:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.669 [2024-05-14 22:52:55.776458] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
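The scheduler trace above sets the dynamic scheduler over RPC (framework_set_scheduler dynamic) before framework_start_init, and the POWER messages show the dpdk governor probe failing harmlessly when no usable cpufreq interface exists. A minimal sketch of the same RPC ordering against a plain spdk_tgt started with --wait-for-rpc is given below; the test itself runs the dedicated scheduler app with --plugin scheduler_plugin, so this is an analogous flow under assumed paths, not the test's exact command line.

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout path
"$SPDK_DIR/build/bin/spdk_tgt" -m 0xF --wait-for-rpc &
tgt_pid=$!
sleep 1   # crude stand-in for the harness's waitforlisten on /var/tmp/spdk.sock
# The scheduler must be chosen before subsystem init completes, hence the order:
"$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic
"$SPDK_DIR/scripts/rpc.py" framework_start_init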
00:05:43.669 22:52:55 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.669 22:52:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:43.669 22:52:55 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.669 22:52:55 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.669 22:52:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:43.669 ************************************ 00:05:43.669 START TEST scheduler_create_thread 00:05:43.669 ************************************ 00:05:43.669 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:43.669 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:43.669 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.669 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.669 2 00:05:43.669 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.670 3 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.670 4 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.670 5 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.670 6 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.670 7 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.670 8 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.670 9 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.670 10 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.670 22:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.605 22:52:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.605 00:05:44.605 real 0m1.173s 00:05:44.605 user 0m0.013s 00:05:44.605 sys 0m0.010s 00:05:44.605 22:52:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.605 ************************************ 00:05:44.605 END TEST scheduler_create_thread 00:05:44.605 ************************************ 00:05:44.605 22:52:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.864 22:52:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:44.864 22:52:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61538 00:05:44.864 22:52:56 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 61538 ']' 00:05:44.864 22:52:56 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 61538 00:05:44.864 22:52:56 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:44.864 22:52:57 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.864 22:52:57 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61538 00:05:44.864 killing process with pid 61538 00:05:44.864 22:52:57 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:44.864 22:52:57 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:44.864 22:52:57 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61538' 00:05:44.864 22:52:57 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 61538 00:05:44.864 22:52:57 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 61538 00:05:45.122 [2024-05-14 22:52:57.438526] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
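Both the scheduler teardown above and the earlier json_config shutdown follow the same pattern: send SIGINT to the target pid, then poll it with kill -0 on a bounded retry loop (30 iterations of 0.5 s in the json_config trace). A condensed sketch of that helper, simplified from what json_config/common.sh and autotest_common.sh actually trace and assuming a cooperative target:

stop_target() {
    # Signal the app and wait up to ~15 s (30 x 0.5 s) for it to exit.
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0    # process has exited
        sleep 0.5
    done
    echo "process $pid did not shut down in time" >&2
    return 1
}
# Usage: stop_target "$spdk_tgt_pid"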
00:05:45.381 ************************************ 00:05:45.381 END TEST event_scheduler 00:05:45.381 ************************************ 00:05:45.381 00:05:45.381 real 0m3.031s 00:05:45.381 user 0m5.642s 00:05:45.381 sys 0m0.296s 00:05:45.381 22:52:57 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.381 22:52:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.381 22:52:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:45.381 22:52:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:45.381 22:52:57 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.381 22:52:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.381 22:52:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.381 ************************************ 00:05:45.381 START TEST app_repeat 00:05:45.381 ************************************ 00:05:45.381 22:52:57 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:45.381 Process app_repeat pid: 61639 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61639 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61639' 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.381 spdk_app_start Round 0 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:45.381 22:52:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61639 /var/tmp/spdk-nbd.sock 00:05:45.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.381 22:52:57 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61639 ']' 00:05:45.381 22:52:57 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.381 22:52:57 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.381 22:52:57 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.381 22:52:57 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.381 22:52:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.381 [2024-05-14 22:52:57.715122] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
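Aside: the app_repeat test starting here launches a small SPDK app on two cores with a 4 s repeat timer and then blocks until its UNIX RPC socket answers. The launch command and socket path below are taken from the event.sh trace above; the readiness poll is an approximation of waitforlisten, whose internals are not part of this excerpt.

# Sketch of the app_repeat launch pattern (event.sh@18-25 in the trace).
rpc_server=/var/tmp/spdk-nbd.sock
repeat_binary=/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat

"$repeat_binary" -r "$rpc_server" -m 0x3 -t 4 &
repeat_pid=$!
trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

# assumed readiness check: poll the socket until the app answers an RPC
for ((i = 0; i < 100; i++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done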
00:05:45.381 [2024-05-14 22:52:57.715211] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61639 ] 00:05:45.639 [2024-05-14 22:52:57.848482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.639 [2024-05-14 22:52:57.915221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.639 [2024-05-14 22:52:57.915228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.640 22:52:57 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.640 22:52:57 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:45.640 22:52:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.898 Malloc0 00:05:45.898 22:52:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.155 Malloc1 00:05:46.155 22:52:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.155 22:52:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.412 /dev/nbd0 00:05:46.412 22:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.412 22:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:46.412 22:52:58 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.412 1+0 records in 00:05:46.412 1+0 records out 00:05:46.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221801 s, 18.5 MB/s 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:46.412 22:52:58 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:46.412 22:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.412 22:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.412 22:52:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.669 /dev/nbd1 00:05:46.669 22:52:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.669 22:52:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.669 1+0 records in 00:05:46.669 1+0 records out 00:05:46.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334234 s, 12.3 MB/s 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:46.669 22:52:59 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.927 22:52:59 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:46.927 22:52:59 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:46.927 22:52:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.927 22:52:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.927 22:52:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.927 22:52:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
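Aside: each nbd_start_disk call above is followed by the waitfornbd helper, whose traced lines (autotest_common.sh@864-885) amount to: wait for the device to appear in /proc/partitions, then prove it can serve a read by copying one 4 KiB block out of it with direct I/O. A condensed reconstruction is below; the 0.1 s back-off between retries is an assumption, since only the 20-try bound is visible in the trace.

# Condensed waitfornbd, reconstructed from the traced checks.
waitfornbd() {
    local nbd_name=$1 i
    local test_file=/home/vagrant/spdk_repo/spdk/test/event/nbdtest

    # wait until the kernel exposes the device
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # read one direct-I/O block to confirm the backing bdev answers
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$test_file" bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done

    # a non-empty copy means the device is usable
    local size
    size=$(stat -c %s "$test_file")
    rm -f "$test_file"
    [ "$size" != 0 ]
}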
00:05:46.927 22:52:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.185 { 00:05:47.185 "bdev_name": "Malloc0", 00:05:47.185 "nbd_device": "/dev/nbd0" 00:05:47.185 }, 00:05:47.185 { 00:05:47.185 "bdev_name": "Malloc1", 00:05:47.185 "nbd_device": "/dev/nbd1" 00:05:47.185 } 00:05:47.185 ]' 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.185 { 00:05:47.185 "bdev_name": "Malloc0", 00:05:47.185 "nbd_device": "/dev/nbd0" 00:05:47.185 }, 00:05:47.185 { 00:05:47.185 "bdev_name": "Malloc1", 00:05:47.185 "nbd_device": "/dev/nbd1" 00:05:47.185 } 00:05:47.185 ]' 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.185 /dev/nbd1' 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.185 /dev/nbd1' 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.185 256+0 records in 00:05:47.185 256+0 records out 00:05:47.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00714626 s, 147 MB/s 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.185 256+0 records in 00:05:47.185 256+0 records out 00:05:47.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270737 s, 38.7 MB/s 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.185 256+0 records in 00:05:47.185 256+0 records out 00:05:47.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293078 s, 35.8 MB/s 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.185 22:52:59 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.185 22:52:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.444 22:52:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.444 22:52:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.444 22:52:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.444 22:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.444 22:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.444 22:52:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.444 22:52:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.444 22:52:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.444 22:52:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.444 22:52:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.703 22:53:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.703 22:53:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.703 22:53:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.703 22:53:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.703 22:53:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.703 22:53:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.703 22:53:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.703 22:53:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.703 22:53:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.703 22:53:00 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.703 22:53:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.270 22:53:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.270 22:53:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.529 22:53:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.529 [2024-05-14 22:53:00.839956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.529 [2024-05-14 22:53:00.903160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.529 [2024-05-14 22:53:00.903173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.789 [2024-05-14 22:53:00.934697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.789 [2024-05-14 22:53:00.934753] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.320 22:53:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.320 spdk_app_start Round 1 00:05:51.320 22:53:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:51.320 22:53:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61639 /var/tmp/spdk-nbd.sock 00:05:51.320 22:53:03 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61639 ']' 00:05:51.320 22:53:03 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.320 22:53:03 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:51.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.320 22:53:03 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
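Aside: Round 1 repeats the data path already traced in Round 0. nbd_dd_data_verify (nbd_common.sh@70-85) writes the same 1 MiB of random data through each /dev/nbdX with direct I/O and then cmp-checks every device against the source file. A sketch of that write+verify cycle, with file locations taken from the trace:

# Sketch of the nbd data-verify pass traced in each round.
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

# write phase: push the same 1 MiB of random data to every nbd device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done

# verify phase: each device must read back byte-identical data
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"
done
rm "$tmp_file"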
00:05:51.320 22:53:03 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:51.320 22:53:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.885 22:53:03 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:51.885 22:53:03 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:51.885 22:53:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.885 Malloc0 00:05:51.885 22:53:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.143 Malloc1 00:05:52.143 22:53:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.143 22:53:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.402 /dev/nbd0 00:05:52.402 22:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.402 22:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.402 1+0 records in 00:05:52.402 1+0 records out 
00:05:52.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343333 s, 11.9 MB/s 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:52.402 22:53:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:52.402 22:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.402 22:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.402 22:53:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.968 /dev/nbd1 00:05:52.968 22:53:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.968 22:53:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.968 1+0 records in 00:05:52.968 1+0 records out 00:05:52.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408481 s, 10.0 MB/s 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:52.968 22:53:05 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:52.968 22:53:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.968 22:53:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.968 22:53:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.968 22:53:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.968 22:53:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.968 22:53:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.968 { 00:05:52.968 "bdev_name": "Malloc0", 00:05:52.968 "nbd_device": "/dev/nbd0" 00:05:52.968 }, 00:05:52.968 { 00:05:52.968 "bdev_name": "Malloc1", 00:05:52.968 "nbd_device": "/dev/nbd1" 00:05:52.968 } 
00:05:52.968 ]' 00:05:52.968 22:53:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.968 { 00:05:52.968 "bdev_name": "Malloc0", 00:05:52.968 "nbd_device": "/dev/nbd0" 00:05:52.968 }, 00:05:52.968 { 00:05:52.968 "bdev_name": "Malloc1", 00:05:52.968 "nbd_device": "/dev/nbd1" 00:05:52.968 } 00:05:52.968 ]' 00:05:52.968 22:53:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.226 /dev/nbd1' 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.226 /dev/nbd1' 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.226 256+0 records in 00:05:53.226 256+0 records out 00:05:53.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00668943 s, 157 MB/s 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.226 256+0 records in 00:05:53.226 256+0 records out 00:05:53.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266246 s, 39.4 MB/s 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.226 256+0 records in 00:05:53.226 256+0 records out 00:05:53.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316698 s, 33.1 MB/s 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.226 22:53:05 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.226 22:53:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.227 22:53:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.485 22:53:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.485 22:53:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.485 22:53:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.485 22:53:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.485 22:53:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.485 22:53:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.485 22:53:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.485 22:53:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.485 22:53:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.485 22:53:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.743 22:53:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.743 22:53:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.743 22:53:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.743 22:53:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.743 22:53:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.743 22:53:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.743 22:53:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.743 22:53:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.743 22:53:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.743 22:53:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.743 22:53:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.001 22:53:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.001 22:53:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.001 22:53:06 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:54.259 22:53:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.259 22:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.259 22:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.259 22:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.259 22:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.259 22:53:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.259 22:53:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.259 22:53:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.259 22:53:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.259 22:53:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.517 22:53:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.774 [2024-05-14 22:53:06.936003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.774 [2024-05-14 22:53:07.001729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.774 [2024-05-14 22:53:07.001746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.774 [2024-05-14 22:53:07.036529] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.774 [2024-05-14 22:53:07.036593] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.056 22:53:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.056 spdk_app_start Round 2 00:05:58.056 22:53:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:58.056 22:53:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61639 /var/tmp/spdk-nbd.sock 00:05:58.056 22:53:09 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61639 ']' 00:05:58.056 22:53:09 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.056 22:53:09 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:58.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.056 22:53:09 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
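Aside: between rounds the test tears the app down over RPC rather than with a shell signal: spdk_kill_instance SIGTERM is sent on the nbd socket, the script sleeps 3 s, and the next Round banner appears. The loop shape visible at event.sh@23-35 is roughly as follows; start_and_verify is a hypothetical stand-in for the malloc/nbd setup and data-verify steps traced above, not a real helper name.

# Approximate shape of the app_repeat round loop (event.sh@23-35).
start_and_verify() { :; }   # placeholder for waitforlisten + bdev_malloc_create + nbd verify

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    start_and_verify
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        spdk_kill_instance SIGTERM
    sleep 3
done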
00:05:58.056 22:53:09 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:58.056 22:53:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.056 22:53:10 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.056 22:53:10 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:58.056 22:53:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.056 Malloc0 00:05:58.056 22:53:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.315 Malloc1 00:05:58.315 22:53:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.315 22:53:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.573 /dev/nbd0 00:05:58.573 22:53:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.573 22:53:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.573 1+0 records in 00:05:58.573 1+0 records out 
00:05:58.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435381 s, 9.4 MB/s 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:58.573 22:53:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:58.573 22:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.573 22:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.573 22:53:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.832 /dev/nbd1 00:05:58.832 22:53:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.091 22:53:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.091 1+0 records in 00:05:59.091 1+0 records out 00:05:59.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305587 s, 13.4 MB/s 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:59.091 22:53:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:59.091 22:53:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.091 22:53:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.091 22:53:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.091 22:53:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.091 22:53:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.349 { 00:05:59.349 "bdev_name": "Malloc0", 00:05:59.349 "nbd_device": "/dev/nbd0" 00:05:59.349 }, 00:05:59.349 { 00:05:59.349 "bdev_name": "Malloc1", 00:05:59.349 "nbd_device": "/dev/nbd1" 00:05:59.349 } 
00:05:59.349 ]' 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.349 { 00:05:59.349 "bdev_name": "Malloc0", 00:05:59.349 "nbd_device": "/dev/nbd0" 00:05:59.349 }, 00:05:59.349 { 00:05:59.349 "bdev_name": "Malloc1", 00:05:59.349 "nbd_device": "/dev/nbd1" 00:05:59.349 } 00:05:59.349 ]' 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.349 /dev/nbd1' 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.349 /dev/nbd1' 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.349 256+0 records in 00:05:59.349 256+0 records out 00:05:59.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00802031 s, 131 MB/s 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.349 256+0 records in 00:05:59.349 256+0 records out 00:05:59.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283776 s, 37.0 MB/s 00:05:59.349 22:53:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.350 256+0 records in 00:05:59.350 256+0 records out 00:05:59.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296343 s, 35.4 MB/s 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.350 22:53:11 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.350 22:53:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.607 22:53:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.607 22:53:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.607 22:53:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.607 22:53:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.607 22:53:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.607 22:53:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.607 22:53:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.607 22:53:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.607 22:53:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.607 22:53:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.863 22:53:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.863 22:53:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.863 22:53:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.863 22:53:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.863 22:53:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.863 22:53:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.120 22:53:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.120 22:53:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.120 22:53:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.120 22:53:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.120 22:53:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.120 22:53:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.120 22:53:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.120 22:53:12 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:00.378 22:53:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.378 22:53:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.378 22:53:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.378 22:53:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.378 22:53:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.378 22:53:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.378 22:53:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.378 22:53:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.378 22:53:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.378 22:53:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.635 22:53:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.635 [2024-05-14 22:53:13.010194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.893 [2024-05-14 22:53:13.069015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.893 [2024-05-14 22:53:13.069025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.893 [2024-05-14 22:53:13.099229] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.893 [2024-05-14 22:53:13.099292] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.169 22:53:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61639 /var/tmp/spdk-nbd.sock 00:06:04.169 22:53:15 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 61639 ']' 00:06:04.169 22:53:15 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.169 22:53:15 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.169 22:53:15 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:04.169 22:53:15 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.169 22:53:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.169 22:53:16 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.169 22:53:16 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:04.169 22:53:16 event.app_repeat -- event/event.sh@39 -- # killprocess 61639 00:06:04.169 22:53:16 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 61639 ']' 00:06:04.169 22:53:16 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 61639 00:06:04.169 22:53:16 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:04.169 22:53:16 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.169 22:53:16 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61639 00:06:04.169 22:53:16 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.170 22:53:16 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.170 22:53:16 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61639' 00:06:04.170 killing process with pid 61639 00:06:04.170 22:53:16 event.app_repeat -- common/autotest_common.sh@965 -- # kill 61639 00:06:04.170 22:53:16 event.app_repeat -- common/autotest_common.sh@970 -- # wait 61639 00:06:04.170 spdk_app_start is called in Round 0. 00:06:04.170 Shutdown signal received, stop current app iteration 00:06:04.170 Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 reinitialization... 00:06:04.170 spdk_app_start is called in Round 1. 00:06:04.170 Shutdown signal received, stop current app iteration 00:06:04.170 Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 reinitialization... 00:06:04.170 spdk_app_start is called in Round 2. 00:06:04.170 Shutdown signal received, stop current app iteration 00:06:04.170 Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 reinitialization... 00:06:04.170 spdk_app_start is called in Round 3. 00:06:04.170 Shutdown signal received, stop current app iteration 00:06:04.170 22:53:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:04.170 22:53:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:04.170 00:06:04.170 real 0m18.646s 00:06:04.170 user 0m42.343s 00:06:04.170 sys 0m2.768s 00:06:04.170 22:53:16 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.170 ************************************ 00:06:04.170 END TEST app_repeat 00:06:04.170 ************************************ 00:06:04.170 22:53:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.170 22:53:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:04.170 22:53:16 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:04.170 22:53:16 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.170 22:53:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.170 22:53:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.170 ************************************ 00:06:04.170 START TEST cpu_locks 00:06:04.170 ************************************ 00:06:04.170 22:53:16 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:04.170 * Looking for test storage... 
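Aside: killprocess, used here for the app_repeat pid and again below for spdk_tgt, follows the checks traced at autotest_common.sh@946-970: require a pid, confirm the process still exists and is not a sudo wrapper, then kill it and wait for it to exit. A hedged reconstruction (error handling for an already-dead pid is simplified relative to the real helper):

# Reconstruction of killprocess from the traced checks.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                      # process must still be running
    # the trace only exercises the Linux branch of the uname check
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1          # refuse to kill an elevated wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # only works for children of this shell
}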
00:06:04.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:04.170 22:53:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:04.170 22:53:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:04.170 22:53:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:04.170 22:53:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:04.170 22:53:16 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.170 22:53:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.170 22:53:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.170 ************************************ 00:06:04.170 START TEST default_locks 00:06:04.170 ************************************ 00:06:04.170 22:53:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:04.170 22:53:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62256 00:06:04.170 22:53:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62256 00:06:04.170 22:53:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.170 22:53:16 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 62256 ']' 00:06:04.170 22:53:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.170 22:53:16 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.170 22:53:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.170 22:53:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.170 22:53:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.170 [2024-05-14 22:53:16.541667] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:04.170 [2024-05-14 22:53:16.541785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62256 ] 00:06:04.428 [2024-05-14 22:53:16.680918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.428 [2024-05-14 22:53:16.740863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.363 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.363 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:05.363 22:53:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62256 00:06:05.363 22:53:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.363 22:53:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62256 00:06:05.621 22:53:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62256 00:06:05.621 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 62256 ']' 00:06:05.621 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 62256 00:06:05.621 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:05.621 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:05.621 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62256 00:06:05.621 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:05.621 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:05.621 killing process with pid 62256 00:06:05.621 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62256' 00:06:05.621 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 62256 00:06:05.621 22:53:17 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 62256 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62256 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62256 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62256 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 62256 ']' 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.880 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.880 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (62256) - No such process 00:06:05.880 ERROR: process (pid: 62256) is no longer running 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.880 00:06:05.880 real 0m1.749s 00:06:05.880 user 0m1.969s 00:06:05.880 sys 0m0.445s 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.880 ************************************ 00:06:05.880 END TEST default_locks 00:06:05.880 ************************************ 00:06:05.880 22:53:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.880 22:53:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:05.880 22:53:18 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.880 22:53:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.880 22:53:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.139 ************************************ 00:06:06.139 START TEST default_locks_via_rpc 00:06:06.139 ************************************ 00:06:06.139 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:06.139 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62309 00:06:06.139 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62309 00:06:06.139 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.139 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62309 ']' 00:06:06.139 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.139 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.139 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.139 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.139 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.139 [2024-05-14 22:53:18.333324] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:06.139 [2024-05-14 22:53:18.333415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62309 ] 00:06:06.139 [2024-05-14 22:53:18.470987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.398 [2024-05-14 22:53:18.542510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62309 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62309 00:06:06.398 22:53:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.964 22:53:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62309 00:06:06.964 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 62309 ']' 00:06:06.964 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 62309 00:06:06.964 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:06.964 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.964 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62309 
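Editor's note: the locks_exist helper traced just above reduces to a single lslocks check that can be repeated by hand on the test host. A minimal sketch, assuming a placeholder TGT_PID for whichever spdk_tgt instance is being inspected (illustrative only, not part of the captured output):
# TGT_PID is a hypothetical placeholder for the spdk_tgt pid under test (62309 in the trace above)
lslocks -p "$TGT_PID" | grep spdk_cpu_lock   # per-core lock files held by that process
ls /var/tmp/spdk_cpu_lock_*                  # core lock files currently present on the host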
00:06:06.964 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.964 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.964 killing process with pid 62309 00:06:06.964 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62309' 00:06:06.964 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 62309 00:06:06.964 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 62309 00:06:07.223 00:06:07.223 real 0m1.164s 00:06:07.223 user 0m1.207s 00:06:07.223 sys 0m0.406s 00:06:07.223 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.223 22:53:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.223 ************************************ 00:06:07.223 END TEST default_locks_via_rpc 00:06:07.223 ************************************ 00:06:07.223 22:53:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:07.223 22:53:19 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.223 22:53:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.223 22:53:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.223 ************************************ 00:06:07.223 START TEST non_locking_app_on_locked_coremask 00:06:07.223 ************************************ 00:06:07.223 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:07.223 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62370 00:06:07.223 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62370 /var/tmp/spdk.sock 00:06:07.223 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62370 ']' 00:06:07.223 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.223 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.223 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.223 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.223 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.223 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.223 [2024-05-14 22:53:19.552054] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:07.223 [2024-05-14 22:53:19.552159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62370 ] 00:06:07.483 [2024-05-14 22:53:19.691075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.483 [2024-05-14 22:53:19.761742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.750 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.750 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:07.750 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62379 00:06:07.750 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62379 /var/tmp/spdk2.sock 00:06:07.750 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:07.750 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62379 ']' 00:06:07.750 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.750 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.750 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.750 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.750 22:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.750 [2024-05-14 22:53:20.006200] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:07.750 [2024-05-14 22:53:20.006302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62379 ] 00:06:08.009 [2024-05-14 22:53:20.150425] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:08.009 [2024-05-14 22:53:20.150507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.009 [2024-05-14 22:53:20.271707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.945 22:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.945 22:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:08.945 22:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62370 00:06:08.945 22:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62370 00:06:08.945 22:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.513 22:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62370 00:06:09.513 22:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62370 ']' 00:06:09.513 22:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 62370 00:06:09.513 22:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:09.513 22:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.513 22:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62370 00:06:09.771 22:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:09.771 22:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:09.771 killing process with pid 62370 00:06:09.771 22:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62370' 00:06:09.771 22:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 62370 00:06:09.771 22:53:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 62370 00:06:10.339 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62379 00:06:10.339 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62379 ']' 00:06:10.339 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 62379 00:06:10.339 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:10.339 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.339 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62379 00:06:10.339 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.339 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.339 killing process with pid 62379 00:06:10.339 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62379' 00:06:10.339 22:53:22 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 62379 00:06:10.339 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 62379 00:06:10.598 00:06:10.598 real 0m3.303s 00:06:10.598 user 0m3.809s 00:06:10.598 sys 0m0.933s 00:06:10.598 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.598 22:53:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.598 ************************************ 00:06:10.598 END TEST non_locking_app_on_locked_coremask 00:06:10.598 ************************************ 00:06:10.598 22:53:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:10.598 22:53:22 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.598 22:53:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.598 22:53:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.598 ************************************ 00:06:10.598 START TEST locking_app_on_unlocked_coremask 00:06:10.598 ************************************ 00:06:10.598 22:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:10.598 22:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62458 00:06:10.598 22:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 62458 /var/tmp/spdk.sock 00:06:10.598 22:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:10.598 22:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62458 ']' 00:06:10.598 22:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.598 22:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.598 22:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.598 22:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.598 22:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.598 [2024-05-14 22:53:22.897795] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:10.598 [2024-05-14 22:53:22.897929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62458 ] 00:06:10.856 [2024-05-14 22:53:23.031021] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
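Editor's note: for orientation between tests, the cpu_locks group alternates between two spdk_tgt launch modes, both visible verbatim in the traces above and below. A summary sketch of the two invocations (illustrative only, not additional captured output):
# default launch: the target claims a /var/tmp/spdk_cpu_lock_* file for each core in its mask
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
# second instance with core locks disabled and its own RPC socket, so it can share core 0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock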
00:06:10.856 [2024-05-14 22:53:23.031089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.856 [2024-05-14 22:53:23.091914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.114 22:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.114 22:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:11.114 22:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62467 00:06:11.114 22:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:11.114 22:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62467 /var/tmp/spdk2.sock 00:06:11.114 22:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62467 ']' 00:06:11.114 22:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.114 22:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.114 22:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.114 22:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.114 22:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.114 [2024-05-14 22:53:23.316538] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:11.115 [2024-05-14 22:53:23.316641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62467 ] 00:06:11.115 [2024-05-14 22:53:23.463555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.373 [2024-05-14 22:53:23.581010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.939 22:53:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.940 22:53:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:11.940 22:53:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62467 00:06:11.940 22:53:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62467 00:06:11.940 22:53:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.874 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 62458 00:06:12.874 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62458 ']' 00:06:12.874 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 62458 00:06:12.874 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:12.874 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.874 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62458 00:06:12.874 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.874 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:12.874 killing process with pid 62458 00:06:12.874 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62458' 00:06:12.874 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 62458 00:06:12.874 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 62458 00:06:13.441 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62467 00:06:13.441 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62467 ']' 00:06:13.441 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 62467 00:06:13.441 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:13.441 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:13.441 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62467 00:06:13.699 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:13.699 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' 
reactor_0 = sudo ']' 00:06:13.699 killing process with pid 62467 00:06:13.699 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62467' 00:06:13.699 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 62467 00:06:13.699 22:53:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 62467 00:06:13.958 00:06:13.958 real 0m3.278s 00:06:13.958 user 0m3.789s 00:06:13.958 sys 0m0.926s 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.958 ************************************ 00:06:13.958 END TEST locking_app_on_unlocked_coremask 00:06:13.958 ************************************ 00:06:13.958 22:53:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:13.958 22:53:26 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.958 22:53:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.958 22:53:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.958 ************************************ 00:06:13.958 START TEST locking_app_on_locked_coremask 00:06:13.958 ************************************ 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62546 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 62546 /var/tmp/spdk.sock 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62546 ']' 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.958 22:53:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.958 [2024-05-14 22:53:26.234408] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:13.958 [2024-05-14 22:53:26.234505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62546 ] 00:06:14.216 [2024-05-14 22:53:26.373191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.216 [2024-05-14 22:53:26.433724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62574 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62574 /var/tmp/spdk2.sock 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62574 /var/tmp/spdk2.sock 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.210 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62574 /var/tmp/spdk2.sock 00:06:15.211 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62574 ']' 00:06:15.211 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.211 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.211 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.211 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.211 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.211 [2024-05-14 22:53:27.283036] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:15.211 [2024-05-14 22:53:27.283118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62574 ] 00:06:15.211 [2024-05-14 22:53:27.423322] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62546 has claimed it. 00:06:15.211 [2024-05-14 22:53:27.423402] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.778 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (62574) - No such process 00:06:15.778 ERROR: process (pid: 62574) is no longer running 00:06:15.778 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.778 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:15.778 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:15.778 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.778 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:15.778 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.778 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 62546 00:06:15.778 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.778 22:53:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62546 00:06:16.037 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 62546 00:06:16.037 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62546 ']' 00:06:16.037 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 62546 00:06:16.037 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:16.037 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:16.037 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62546 00:06:16.296 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:16.296 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:16.296 killing process with pid 62546 00:06:16.296 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62546' 00:06:16.296 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 62546 00:06:16.296 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 62546 00:06:16.555 00:06:16.555 real 0m2.555s 00:06:16.555 user 0m3.040s 00:06:16.555 sys 0m0.547s 00:06:16.555 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.555 22:53:28 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.555 ************************************ 00:06:16.555 END TEST locking_app_on_locked_coremask 00:06:16.555 ************************************ 00:06:16.555 22:53:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:16.555 22:53:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:16.555 22:53:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.555 22:53:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.555 ************************************ 00:06:16.555 START TEST locking_overlapped_coremask 00:06:16.555 ************************************ 00:06:16.555 22:53:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:16.555 22:53:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62630 00:06:16.555 22:53:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:16.555 22:53:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62630 /var/tmp/spdk.sock 00:06:16.555 22:53:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 62630 ']' 00:06:16.555 22:53:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.555 22:53:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:16.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.555 22:53:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.555 22:53:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:16.555 22:53:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.555 [2024-05-14 22:53:28.875598] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:16.555 [2024-05-14 22:53:28.875743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62630 ] 00:06:16.814 [2024-05-14 22:53:29.023105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.814 [2024-05-14 22:53:29.081996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.814 [2024-05-14 22:53:29.082140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.814 [2024-05-14 22:53:29.082144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62656 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62656 /var/tmp/spdk2.sock 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62656 /var/tmp/spdk2.sock 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62656 /var/tmp/spdk2.sock 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 62656 ']' 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.382 22:53:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.642 [2024-05-14 22:53:29.804231] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:17.642 [2024-05-14 22:53:29.804313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62656 ] 00:06:17.642 [2024-05-14 22:53:29.946915] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62630 has claimed it. 00:06:17.642 [2024-05-14 22:53:29.946999] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:18.209 ERROR: process (pid: 62656) is no longer running 00:06:18.209 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (62656) - No such process 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62630 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 62630 ']' 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 62630 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62630 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:18.209 killing process with pid 62630 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62630' 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 62630 00:06:18.209 22:53:30 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 62630 00:06:18.468 00:06:18.468 real 0m2.033s 00:06:18.468 user 0m5.594s 00:06:18.468 sys 0m0.320s 00:06:18.468 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.468 22:53:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.468 ************************************ 00:06:18.468 END TEST locking_overlapped_coremask 00:06:18.468 ************************************ 00:06:18.468 22:53:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:18.468 22:53:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.468 22:53:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.468 22:53:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.757 ************************************ 00:06:18.757 START TEST locking_overlapped_coremask_via_rpc 00:06:18.757 ************************************ 00:06:18.757 22:53:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:18.757 22:53:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62706 00:06:18.757 22:53:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:18.757 22:53:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62706 /var/tmp/spdk.sock 00:06:18.757 22:53:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62706 ']' 00:06:18.757 22:53:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.757 22:53:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.757 22:53:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.757 22:53:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.757 22:53:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.757 [2024-05-14 22:53:30.924551] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:18.757 [2024-05-14 22:53:30.924663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62706 ] 00:06:18.757 [2024-05-14 22:53:31.062550] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.757 [2024-05-14 22:53:31.062611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.757 [2024-05-14 22:53:31.122504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.757 [2024-05-14 22:53:31.122627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.757 [2024-05-14 22:53:31.122632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.016 22:53:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:19.016 22:53:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:19.016 22:53:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62718 00:06:19.016 22:53:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:19.016 22:53:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62718 /var/tmp/spdk2.sock 00:06:19.016 22:53:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62718 ']' 00:06:19.016 22:53:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.016 22:53:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.016 22:53:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.016 22:53:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.016 22:53:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.016 [2024-05-14 22:53:31.346011] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:19.016 [2024-05-14 22:53:31.346106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62718 ] 00:06:19.274 [2024-05-14 22:53:31.492036] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.274 [2024-05-14 22:53:31.492085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.274 [2024-05-14 22:53:31.612391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.274 [2024-05-14 22:53:31.612510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:19.274 [2024-05-14 22:53:31.612511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.209 [2024-05-14 22:53:32.376892] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62706 has claimed it. 00:06:20.209 2024/05/14 22:53:32 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:20.209 request: 00:06:20.209 { 00:06:20.209 "method": "framework_enable_cpumask_locks", 00:06:20.209 "params": {} 00:06:20.209 } 00:06:20.209 Got JSON-RPC error response 00:06:20.209 GoRPCClient: error on JSON-RPC call 00:06:20.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
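Editor's note: the failed framework_enable_cpumask_locks call above is issued through rpc_cmd; it can be reproduced by hand with the rpc.py client already used earlier in this run. A sketch assuming the same host and socket paths shown in these traces (illustrative, not captured output):
# first call: the primary target on /var/tmp/spdk.sock claims its cores (mask 0x7)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
# second call: the secondary target (mask 0x1c) overlaps the primary on core 2, so this
# fails with the "Cannot create lock on core 2" JSON-RPC error logged above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks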
00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62706 /var/tmp/spdk.sock 00:06:20.209 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62706 ']' 00:06:20.210 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.210 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.210 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.210 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.210 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.468 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.468 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:20.468 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62718 /var/tmp/spdk2.sock 00:06:20.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.468 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62718 ']' 00:06:20.468 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.468 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.468 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
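Both waitforlisten calls in this test follow the same pattern: poll until the target pid is alive and its JSON-RPC UNIX socket is up, giving up after a bounded number of attempts (the traced locals rpc_addr and max_retries=100 are the knobs). A rough stand-alone equivalent, not the actual autotest_common.sh implementation; the retry delay and the socket-existence probe are illustrative guesses:

    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [[ -S "$rpc_addr" ]] && return 0         # socket exists; assume it is listening
            sleep 0.1                                # illustrative delay between retries
        done
        return 1                                     # gave up after max_retries attempts
    }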
00:06:20.468 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.468 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.726 ************************************ 00:06:20.726 END TEST locking_overlapped_coremask_via_rpc 00:06:20.726 ************************************ 00:06:20.726 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.726 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:20.727 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:20.727 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.727 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.727 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.727 00:06:20.727 real 0m2.107s 00:06:20.727 user 0m1.248s 00:06:20.727 sys 0m0.176s 00:06:20.727 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.727 22:53:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.727 22:53:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:20.727 22:53:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62706 ]] 00:06:20.727 22:53:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62706 00:06:20.727 22:53:32 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62706 ']' 00:06:20.727 22:53:32 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62706 00:06:20.727 22:53:32 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:20.727 22:53:33 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.727 22:53:33 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62706 00:06:20.727 killing process with pid 62706 00:06:20.727 22:53:33 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.727 22:53:33 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.727 22:53:33 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62706' 00:06:20.727 22:53:33 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 62706 00:06:20.727 22:53:33 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 62706 00:06:20.985 22:53:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62718 ]] 00:06:20.985 22:53:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62718 00:06:20.985 22:53:33 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62718 ']' 00:06:20.985 22:53:33 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62718 00:06:20.985 22:53:33 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:20.985 22:53:33 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.985 
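check_remaining_locks above asserts that, after the RPC round-trip, exactly the lock files for the first target's cores (0, 1 and 2) are left in /var/tmp: one array is filled by glob expansion of what really exists, the other by brace expansion of what is expected, and the two are compared as whole strings. The same comparison in a quoted, stand-alone form, functionally equivalent to the unquoted [[ ]] test shown in the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # what is actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # what cores 0-2 should leave behind
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "only the expected per-core locks remain"
    else
        echo "unexpected lock files: ${locks[*]}"
    fi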
22:53:33 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62718 00:06:20.985 killing process with pid 62718 00:06:20.985 22:53:33 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:20.985 22:53:33 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:20.985 22:53:33 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62718' 00:06:20.985 22:53:33 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 62718 00:06:20.985 22:53:33 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 62718 00:06:21.248 22:53:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.248 22:53:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:21.248 22:53:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62706 ]] 00:06:21.248 22:53:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62706 00:06:21.248 22:53:33 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62706 ']' 00:06:21.248 22:53:33 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62706 00:06:21.248 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (62706) - No such process 00:06:21.248 Process with pid 62706 is not found 00:06:21.248 22:53:33 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 62706 is not found' 00:06:21.248 22:53:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62718 ]] 00:06:21.248 22:53:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62718 00:06:21.248 22:53:33 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 62718 ']' 00:06:21.248 22:53:33 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 62718 00:06:21.248 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (62718) - No such process 00:06:21.248 Process with pid 62718 is not found 00:06:21.248 22:53:33 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 62718 is not found' 00:06:21.248 22:53:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.248 00:06:21.248 real 0m17.243s 00:06:21.248 user 0m31.161s 00:06:21.248 sys 0m4.382s 00:06:21.248 22:53:33 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.248 22:53:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.248 ************************************ 00:06:21.248 END TEST cpu_locks 00:06:21.248 ************************************ 00:06:21.507 00:06:21.507 real 0m43.246s 00:06:21.507 user 1m25.741s 00:06:21.507 sys 0m7.811s 00:06:21.507 22:53:33 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.507 22:53:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.507 ************************************ 00:06:21.507 END TEST event 00:06:21.507 ************************************ 00:06:21.507 22:53:33 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:21.507 22:53:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.507 22:53:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.507 22:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:21.507 ************************************ 00:06:21.507 START TEST thread 00:06:21.507 ************************************ 00:06:21.507 22:53:33 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:21.507 * Looking for test storage... 
00:06:21.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:21.507 22:53:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.507 22:53:33 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:21.507 22:53:33 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.508 22:53:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.508 ************************************ 00:06:21.508 START TEST thread_poller_perf 00:06:21.508 ************************************ 00:06:21.508 22:53:33 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.508 [2024-05-14 22:53:33.810917] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:21.508 [2024-05-14 22:53:33.811012] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62865 ] 00:06:21.765 [2024-05-14 22:53:33.942858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.765 [2024-05-14 22:53:34.030753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.765 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:23.139 ====================================== 00:06:23.139 busy:2217403190 (cyc) 00:06:23.139 total_run_count: 300000 00:06:23.139 tsc_hz: 2200000000 (cyc) 00:06:23.139 ====================================== 00:06:23.139 poller_cost: 7391 (cyc), 3359 (nsec) 00:06:23.139 00:06:23.139 real 0m1.348s 00:06:23.139 user 0m1.195s 00:06:23.139 sys 0m0.046s 00:06:23.139 22:53:35 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.139 22:53:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.139 ************************************ 00:06:23.139 END TEST thread_poller_perf 00:06:23.139 ************************************ 00:06:23.139 22:53:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.139 22:53:35 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:23.139 22:53:35 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.139 22:53:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.139 ************************************ 00:06:23.139 START TEST thread_poller_perf 00:06:23.139 ************************************ 00:06:23.139 22:53:35 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.139 [2024-05-14 22:53:35.213250] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:23.139 [2024-05-14 22:53:35.213381] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62900 ] 00:06:23.139 [2024-05-14 22:53:35.346190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.139 [2024-05-14 22:53:35.406078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.139 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:24.516 ====================================== 00:06:24.516 busy:2202373673 (cyc) 00:06:24.516 total_run_count: 4030000 00:06:24.516 tsc_hz: 2200000000 (cyc) 00:06:24.516 ====================================== 00:06:24.516 poller_cost: 546 (cyc), 248 (nsec) 00:06:24.516 00:06:24.516 real 0m1.316s 00:06:24.516 user 0m1.172s 00:06:24.516 sys 0m0.036s 00:06:24.516 22:53:36 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.516 22:53:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.516 ************************************ 00:06:24.516 END TEST thread_poller_perf 00:06:24.516 ************************************ 00:06:24.516 22:53:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:24.516 00:06:24.516 real 0m2.847s 00:06:24.516 user 0m2.424s 00:06:24.516 sys 0m0.202s 00:06:24.516 22:53:36 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.516 22:53:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.516 ************************************ 00:06:24.516 END TEST thread 00:06:24.516 ************************************ 00:06:24.516 22:53:36 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:24.516 22:53:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.516 22:53:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.516 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:06:24.516 ************************************ 00:06:24.516 START TEST accel 00:06:24.516 ************************************ 00:06:24.516 22:53:36 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:24.516 * Looking for test storage... 00:06:24.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:24.516 22:53:36 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:24.516 22:53:36 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:24.516 22:53:36 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:24.516 22:53:36 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=62969 00:06:24.516 22:53:36 accel -- accel/accel.sh@63 -- # waitforlisten 62969 00:06:24.516 22:53:36 accel -- common/autotest_common.sh@827 -- # '[' -z 62969 ']' 00:06:24.516 22:53:36 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.516 22:53:36 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.516 22:53:36 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:24.516 22:53:36 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
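The two poller_perf summaries above reduce to one division each: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure is that quotient divided by 2.2 cycles/ns (tsc_hz = 2200000000). Reproducing the logged numbers with shell arithmetic, using only values copied from the summaries:

    # run 1: 1 us poller period; run 2: 0 us period
    echo "run 1: $(( 2217403190 / 300000  )) cyc, $(( 2217403190 / 300000  * 10 / 22 )) nsec"
    echo "run 2: $(( 2202373673 / 4030000 )) cyc, $(( 2202373673 / 4030000 * 10 / 22 )) nsec"
    # -> run 1: 7391 cyc, 3359 nsec   (matches the first summary)
    # -> run 2: 546 cyc, 248 nsec     (matches the second summary)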
00:06:24.516 22:53:36 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:24.516 22:53:36 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.516 22:53:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.516 22:53:36 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.516 22:53:36 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.516 22:53:36 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.516 22:53:36 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.516 22:53:36 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.516 22:53:36 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:24.516 22:53:36 accel -- accel/accel.sh@41 -- # jq -r . 00:06:24.516 [2024-05-14 22:53:36.744057] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:24.516 [2024-05-14 22:53:36.744158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62969 ] 00:06:24.516 [2024-05-14 22:53:36.882295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.774 [2024-05-14 22:53:36.942123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.774 22:53:37 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.774 22:53:37 accel -- common/autotest_common.sh@860 -- # return 0 00:06:24.774 22:53:37 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:24.774 22:53:37 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:24.774 22:53:37 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:24.774 22:53:37 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:24.774 22:53:37 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:24.774 22:53:37 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:24.774 22:53:37 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.774 22:53:37 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:24.774 22:53:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.774 22:53:37 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.032 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.032 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.032 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.032 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.032 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.032 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.032 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.032 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.032 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.032 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.032 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.032 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.032 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.033 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.033 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.033 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.033 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.033 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.033 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.033 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.033 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.033 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.033 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.033 
22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.033 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.033 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.033 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.033 22:53:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # IFS== 00:06:25.033 22:53:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:25.033 22:53:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:25.033 22:53:37 accel -- accel/accel.sh@75 -- # killprocess 62969 00:06:25.033 22:53:37 accel -- common/autotest_common.sh@946 -- # '[' -z 62969 ']' 00:06:25.033 22:53:37 accel -- common/autotest_common.sh@950 -- # kill -0 62969 00:06:25.033 22:53:37 accel -- common/autotest_common.sh@951 -- # uname 00:06:25.033 22:53:37 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.033 22:53:37 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62969 00:06:25.033 22:53:37 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.033 22:53:37 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.033 killing process with pid 62969 00:06:25.033 22:53:37 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62969' 00:06:25.033 22:53:37 accel -- common/autotest_common.sh@965 -- # kill 62969 00:06:25.033 22:53:37 accel -- common/autotest_common.sh@970 -- # wait 62969 00:06:25.384 22:53:37 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:25.384 22:53:37 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:25.384 22:53:37 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:25.384 22:53:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.384 22:53:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.384 22:53:37 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:25.384 22:53:37 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:25.384 22:53:37 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:25.384 22:53:37 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.384 22:53:37 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.384 22:53:37 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.384 22:53:37 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.384 22:53:37 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.384 22:53:37 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:25.384 22:53:37 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
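The get_expected_opcs loop above pipes accel_get_opc_assignments through jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' so that each opcode-to-module assignment becomes a key=value line for the IFS== read -r opc module loop. The same filter applied to a fabricated two-entry object (example input only, not the RPC's real reply):

    echo '{"copy": "software", "crc32c": "software"}' \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # prints:
    #   copy=software
    #   crc32c=software
    # each line is then split on '=' by:  while IFS== read -r opc module; do ...; done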
00:06:25.384 22:53:37 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.384 22:53:37 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:25.384 22:53:37 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:25.384 22:53:37 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:25.384 22:53:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.384 22:53:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.384 ************************************ 00:06:25.384 START TEST accel_missing_filename 00:06:25.384 ************************************ 00:06:25.384 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:25.384 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:25.384 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:25.384 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:25.384 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.384 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:25.384 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.384 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:25.384 22:53:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:25.384 22:53:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:25.384 22:53:37 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.384 22:53:37 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.384 22:53:37 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.384 22:53:37 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.384 22:53:37 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.384 22:53:37 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:25.384 22:53:37 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:25.384 [2024-05-14 22:53:37.583852] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:25.384 [2024-05-14 22:53:37.583999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63025 ] 00:06:25.384 [2024-05-14 22:53:37.721408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.645 [2024-05-14 22:53:37.796000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.645 [2024-05-14 22:53:37.829576] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.645 [2024-05-14 22:53:37.872783] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:25.645 A filename is required. 
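"A filename is required." is the expected outcome here: for the compress workload accel_perf needs an uncompressed input file passed with -l, which this run deliberately omits. A hedged example of the positive counterpart, reusing the binary and test-input paths that appear verbatim in this log:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
    # Adding -y (verify) on top of this is exactly what accel_compress_verify below
    # does, and it fails differently: "Compression does not support the verify option".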
00:06:25.645 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:25.645 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.645 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:25.645 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:25.645 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:25.645 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.645 00:06:25.645 real 0m0.428s 00:06:25.645 user 0m0.276s 00:06:25.645 sys 0m0.088s 00:06:25.645 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.645 22:53:37 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:25.645 ************************************ 00:06:25.645 END TEST accel_missing_filename 00:06:25.645 ************************************ 00:06:25.645 22:53:38 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.645 22:53:38 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:25.645 22:53:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.645 22:53:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.645 ************************************ 00:06:25.645 START TEST accel_compress_verify 00:06:25.645 ************************************ 00:06:25.645 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.645 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:25.645 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.645 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:25.645 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.645 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:25.645 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:25.645 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.645 22:53:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.645 22:53:38 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:25.645 22:53:38 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.645 22:53:38 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.645 22:53:38 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.904 22:53:38 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.904 22:53:38 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.904 22:53:38 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:25.904 22:53:38 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:06:25.904 [2024-05-14 22:53:38.053633] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:25.904 [2024-05-14 22:53:38.053799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63050 ] 00:06:25.904 [2024-05-14 22:53:38.194386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.904 [2024-05-14 22:53:38.264790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.163 [2024-05-14 22:53:38.298163] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.163 [2024-05-14 22:53:38.340794] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:26.163 00:06:26.163 Compression does not support the verify option, aborting. 00:06:26.163 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:26.163 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.163 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:26.163 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:26.163 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:26.163 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.163 00:06:26.163 real 0m0.421s 00:06:26.163 user 0m0.293s 00:06:26.163 sys 0m0.076s 00:06:26.163 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.163 ************************************ 00:06:26.163 22:53:38 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:26.163 END TEST accel_compress_verify 00:06:26.163 ************************************ 00:06:26.163 22:53:38 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:26.163 22:53:38 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:26.163 22:53:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.163 22:53:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.163 ************************************ 00:06:26.163 START TEST accel_wrong_workload 00:06:26.163 ************************************ 00:06:26.163 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:26.163 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:26.163 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:26.163 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:26.163 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.163 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:26.163 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.163 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:26.163 22:53:38 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:26.163 22:53:38 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:26.163 22:53:38 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.163 22:53:38 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.163 22:53:38 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.163 22:53:38 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.163 22:53:38 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.164 22:53:38 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:26.164 22:53:38 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:26.164 Unsupported workload type: foobar 00:06:26.164 [2024-05-14 22:53:38.518544] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:26.164 accel_perf options: 00:06:26.164 [-h help message] 00:06:26.164 [-q queue depth per core] 00:06:26.164 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:26.164 [-T number of threads per core 00:06:26.164 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:26.164 [-t time in seconds] 00:06:26.164 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:26.164 [ dif_verify, , dif_generate, dif_generate_copy 00:06:26.164 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:26.164 [-l for compress/decompress workloads, name of uncompressed input file 00:06:26.164 [-S for crc32c workload, use this seed value (default 0) 00:06:26.164 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:26.164 [-f for fill workload, use this BYTE value (default 255) 00:06:26.164 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:26.164 [-y verify result if this switch is on] 00:06:26.164 [-a tasks to allocate per core (default: same value as -q)] 00:06:26.164 Can be used to spread operations across a wider range of memory. 
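The NOT wrapper used throughout these negative accel tests succeeds only when the wrapped command fails; the traced es bookkeeping (local es=0, the (( es > 128 )) check, the earlier es=234 -> es=106 and es=161 -> es=33 adjustments, the final (( !es == 0 ))) records that inversion. A stripped-down sketch of the idea, not the actual autotest_common.sh code, which additionally maps specific exit codes through a case statement:

    NOT() {
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then
            es=$(( es - 128 ))   # strip the signal offset, cf. es=234 -> es=106 earlier
        fi
        (( es != 0 ))            # NOT passes only if the wrapped command failed
    }
    # Usage mirroring this test:  NOT accel_perf -t 1 -w foobar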
00:06:26.164 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:26.164 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.164 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.164 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.164 00:06:26.164 real 0m0.028s 00:06:26.164 user 0m0.014s 00:06:26.164 sys 0m0.014s 00:06:26.164 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.164 22:53:38 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:26.164 ************************************ 00:06:26.164 END TEST accel_wrong_workload 00:06:26.164 ************************************ 00:06:26.423 22:53:38 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:26.423 22:53:38 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:26.423 22:53:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.423 22:53:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.423 ************************************ 00:06:26.423 START TEST accel_negative_buffers 00:06:26.423 ************************************ 00:06:26.423 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:26.423 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:26.423 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:26.423 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:26.423 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.423 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:26.423 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.423 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:26.423 22:53:38 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:26.423 22:53:38 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:26.423 22:53:38 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.423 22:53:38 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.423 22:53:38 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.423 22:53:38 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.423 22:53:38 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.423 22:53:38 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:26.423 22:53:38 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:26.423 -x option must be non-negative. 
00:06:26.423 [2024-05-14 22:53:38.597596] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:26.423 accel_perf options: 00:06:26.423 [-h help message] 00:06:26.423 [-q queue depth per core] 00:06:26.423 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:26.423 [-T number of threads per core 00:06:26.423 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:26.423 [-t time in seconds] 00:06:26.423 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:26.423 [ dif_verify, , dif_generate, dif_generate_copy 00:06:26.423 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:26.423 [-l for compress/decompress workloads, name of uncompressed input file 00:06:26.423 [-S for crc32c workload, use this seed value (default 0) 00:06:26.424 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:26.424 [-f for fill workload, use this BYTE value (default 255) 00:06:26.424 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:26.424 [-y verify result if this switch is on] 00:06:26.424 [-a tasks to allocate per core (default: same value as -q)] 00:06:26.424 Can be used to spread operations across a wider range of memory. 00:06:26.424 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:26.424 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.424 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.424 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.424 00:06:26.424 real 0m0.039s 00:06:26.424 user 0m0.023s 00:06:26.424 sys 0m0.014s 00:06:26.424 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.424 22:53:38 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:26.424 ************************************ 00:06:26.424 END TEST accel_negative_buffers 00:06:26.424 ************************************ 00:06:26.424 22:53:38 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:26.424 22:53:38 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:26.424 22:53:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.424 22:53:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.424 ************************************ 00:06:26.424 START TEST accel_crc32c 00:06:26.424 ************************************ 00:06:26.424 22:53:38 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:26.424 22:53:38 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:26.424 [2024-05-14 22:53:38.669556] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:26.424 [2024-05-14 22:53:38.669635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63108 ] 00:06:26.424 [2024-05-14 22:53:38.806132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.684 [2024-05-14 22:53:38.876730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:26.684 22:53:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:28.062 22:53:40 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.062 00:06:28.062 real 0m1.400s 00:06:28.062 user 0m1.229s 00:06:28.062 sys 0m0.077s 00:06:28.062 22:53:40 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.062 22:53:40 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:28.062 ************************************ 00:06:28.062 END TEST accel_crc32c 00:06:28.062 ************************************ 00:06:28.062 22:53:40 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:28.062 22:53:40 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:28.062 22:53:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.062 22:53:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.062 ************************************ 00:06:28.062 START TEST accel_crc32c_C2 00:06:28.062 ************************************ 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.062 22:53:40 
accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:28.062 [2024-05-14 22:53:40.118425] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:28.062 [2024-05-14 22:53:40.118508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63143 ] 00:06:28.062 [2024-05-14 22:53:40.255399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.062 [2024-05-14 22:53:40.325455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.062 22:53:40 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.062 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:28.063 22:53:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.444 00:06:29.444 real 0m1.401s 00:06:29.444 user 0m1.234s 00:06:29.444 sys 0m0.073s 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.444 22:53:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:29.444 ************************************ 00:06:29.444 END TEST accel_crc32c_C2 00:06:29.444 ************************************ 00:06:29.444 22:53:41 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:29.444 22:53:41 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:29.444 22:53:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.444 22:53:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.444 ************************************ 00:06:29.444 START TEST accel_copy 00:06:29.444 ************************************ 00:06:29.444 22:53:41 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:29.444 
22:53:41 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:29.444 [2024-05-14 22:53:41.564944] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:29.444 [2024-05-14 22:53:41.565751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63177 ] 00:06:29.444 [2024-05-14 22:53:41.701562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.444 [2024-05-14 22:53:41.760756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy 
-- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.444 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.445 22:53:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.818 22:53:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.819 ************************************ 00:06:30.819 END TEST accel_copy 00:06:30.819 ************************************ 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:30.819 22:53:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.819 00:06:30.819 real 0m1.395s 00:06:30.819 user 0m1.231s 00:06:30.819 sys 0m0.067s 00:06:30.819 22:53:42 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.819 22:53:42 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:30.819 22:53:42 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.819 22:53:42 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:30.819 22:53:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.819 22:53:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.819 ************************************ 00:06:30.819 START TEST accel_fill 00:06:30.819 ************************************ 00:06:30.819 22:53:42 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:30.819 22:53:42 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:30.819 [2024-05-14 22:53:43.012691] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:30.819 [2024-05-14 22:53:43.012832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63212 ] 00:06:30.819 [2024-05-14 22:53:43.150725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.077 [2024-05-14 22:53:43.212423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@22 -- # 
accel_module=software 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:31.077 22:53:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_fill 
-- accel/accel.sh@20 -- # val= 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:32.455 22:53:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.455 00:06:32.455 real 0m1.451s 00:06:32.455 user 0m1.277s 00:06:32.455 sys 0m0.077s 00:06:32.455 22:53:44 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.455 22:53:44 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:32.455 ************************************ 00:06:32.455 END TEST accel_fill 00:06:32.455 ************************************ 00:06:32.455 22:53:44 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:32.455 22:53:44 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:32.455 22:53:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.455 22:53:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.455 ************************************ 00:06:32.455 START TEST accel_copy_crc32c 00:06:32.455 ************************************ 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:32.455 [2024-05-14 22:53:44.505628] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:32.455 [2024-05-14 22:53:44.505729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63248 ] 00:06:32.455 [2024-05-14 22:53:44.644331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.455 [2024-05-14 22:53:44.713058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.455 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.456 22:53:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.827 00:06:33.827 real 0m1.407s 00:06:33.827 user 0m1.237s 00:06:33.827 sys 0m0.079s 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.827 ************************************ 00:06:33.827 END TEST accel_copy_crc32c 00:06:33.827 ************************************ 00:06:33.827 22:53:45 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:33.827 22:53:45 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:33.827 22:53:45 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:33.827 22:53:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.827 22:53:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.827 ************************************ 00:06:33.827 START TEST accel_copy_crc32c_C2 00:06:33.827 ************************************ 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:33.827 22:53:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:33.827 [2024-05-14 22:53:45.961812] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:33.827 [2024-05-14 22:53:45.961890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63277 ] 00:06:33.827 [2024-05-14 22:53:46.095852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.827 [2024-05-14 22:53:46.167489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.827 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.827 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.827 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.827 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.828 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:34.086 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.086 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.086 22:53:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.018 ************************************ 00:06:35.018 END TEST accel_copy_crc32c_C2 00:06:35.018 ************************************ 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.018 00:06:35.018 real 0m1.411s 00:06:35.018 user 0m1.245s 00:06:35.018 sys 0m0.070s 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.018 22:53:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:35.018 22:53:47 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:06:35.018 22:53:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:35.018 22:53:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.018 22:53:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.018 ************************************ 00:06:35.018 START TEST accel_dualcast 00:06:35.018 ************************************ 00:06:35.018 22:53:47 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:35.018 22:53:47 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:35.279 [2024-05-14 22:53:47.416755] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:35.279 [2024-05-14 22:53:47.416854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63312 ] 00:06:35.279 [2024-05-14 22:53:47.554751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.279 [2024-05-14 22:53:47.626657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.279 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:35.537 22:53:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:35.538 22:53:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:35.538 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:35.538 22:53:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.470 ************************************ 00:06:36.470 END TEST accel_dualcast 00:06:36.470 ************************************ 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.470 22:53:48 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:36.470 22:53:48 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.470 00:06:36.470 real 0m1.410s 00:06:36.470 user 0m0.011s 00:06:36.470 sys 0m0.005s 00:06:36.470 22:53:48 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.470 22:53:48 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:36.470 22:53:48 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:36.470 22:53:48 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:36.470 22:53:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.470 22:53:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.470 ************************************ 00:06:36.470 START TEST accel_compare 00:06:36.470 ************************************ 00:06:36.470 22:53:48 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:36.470 22:53:48 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:36.729 [2024-05-14 22:53:48.870057] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:36.729 [2024-05-14 22:53:48.870744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63346 ] 00:06:36.729 [2024-05-14 22:53:49.006311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.729 [2024-05-14 22:53:49.077454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.729 22:53:49 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.987 22:53:49 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.987 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.988 22:53:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.922 22:53:50 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.922 ************************************ 00:06:37.922 END TEST accel_compare 00:06:37.922 ************************************ 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:37.922 22:53:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.922 00:06:37.922 real 0m1.406s 00:06:37.922 user 0m1.228s 00:06:37.922 sys 0m0.081s 00:06:37.922 22:53:50 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.922 22:53:50 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:37.922 22:53:50 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:37.922 22:53:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:37.922 22:53:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.922 22:53:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.922 ************************************ 00:06:37.922 START TEST accel_xor 00:06:37.922 ************************************ 00:06:37.922 22:53:50 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:37.922 22:53:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:38.181 [2024-05-14 22:53:50.326196] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
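For reference, each of the compare/xor cases traced here reduces to a single accel_perf invocation using the binary path and flags shown in the trace; a minimal sketch, assuming the software accel module needs no JSON config piped in over /dev/fd/62 the way the harness does it, is:

# Re-run the software-path compare and xor workloads from this log by hand.
# Assumption: with no hardware module configured, accel_perf falls back to the
# software engine and the -c JSON config can be omitted.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
"$ACCEL_PERF" -t 1 -w compare -y   # 1-second compare workload, -y verifies results
"$ACCEL_PERF" -t 1 -w xor -y       # 1-second xor workload (the -x 3 run below adds a third source buffer)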
00:06:38.181 [2024-05-14 22:53:50.326286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63381 ] 00:06:38.181 [2024-05-14 22:53:50.463302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.181 [2024-05-14 22:53:50.522938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:38.181 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:38.182 22:53:50 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.182 22:53:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.560 22:53:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.560 22:53:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.561 00:06:39.561 real 0m1.395s 00:06:39.561 user 0m1.221s 00:06:39.561 sys 0m0.081s 00:06:39.561 22:53:51 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.561 22:53:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:39.561 ************************************ 00:06:39.561 END TEST accel_xor 00:06:39.561 ************************************ 00:06:39.561 22:53:51 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:39.561 22:53:51 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:39.561 22:53:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.561 22:53:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.561 ************************************ 00:06:39.561 START TEST accel_xor 00:06:39.561 ************************************ 00:06:39.561 22:53:51 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:39.561 22:53:51 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:39.561 [2024-05-14 22:53:51.769676] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
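The long runs of IFS=:, read -r var val, and case "$var" lines in this trace are the harness splitting accel_perf's colon-separated key/value output one field at a time; a simplified sketch of that pattern (the function and field names here are illustrative assumptions, not the literal accel.sh code) is:

# Illustrative sketch of the key:value parsing loop seen throughout this trace.
# parse_accel_output and the matched field names are assumptions for illustration.
parse_accel_output() {
    local var val
    while IFS=: read -r var val; do
        case "$var" in
            *opc*) accel_opc=${val//[[:space:]]/} ;;       # operation reported, e.g. xor
            *module*) accel_module=${val//[[:space:]]/} ;; # engine actually used, e.g. software
            *) : ;;                                        # other fields ignored here
        esac
    done
}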
00:06:39.561 [2024-05-14 22:53:51.769801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63414 ] 00:06:39.561 [2024-05-14 22:53:51.907181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.836 [2024-05-14 22:53:51.968095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.836 22:53:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.836 22:53:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.836 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.836 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.836 22:53:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.836 22:53:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.836 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.836 22:53:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.836 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:39.837 22:53:52 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.837 22:53:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:40.779 22:53:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.779 00:06:40.779 real 0m1.399s 00:06:40.779 user 0m1.230s 00:06:40.779 sys 0m0.074s 00:06:40.779 22:53:53 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.779 22:53:53 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:40.779 ************************************ 00:06:40.779 END TEST accel_xor 00:06:40.779 ************************************ 00:06:41.037 22:53:53 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:41.037 22:53:53 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:41.037 22:53:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.037 22:53:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.037 ************************************ 00:06:41.037 START TEST accel_dif_verify 00:06:41.037 ************************************ 00:06:41.037 22:53:53 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:41.037 22:53:53 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:41.037 22:53:53 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:41.037 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.037 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.037 22:53:53 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:41.037 22:53:53 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:41.037 22:53:53 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:41.037 22:53:53 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.037 22:53:53 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.038 22:53:53 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.038 22:53:53 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.038 22:53:53 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.038 22:53:53 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:41.038 22:53:53 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:41.038 [2024-05-14 22:53:53.213407] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
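The dif_verify case configured here exercises T10 DIF protection; the '4096 bytes', '512 bytes', and '8 bytes' values echoed in the val= lines that follow line up with 4096-byte buffers of 512-byte blocks, each carrying an 8-byte DIF tuple (guard, application tag, reference tag). A quick arithmetic sketch, with that interpretation of the fields taken as an assumption, is:

# Sizes copied from the trace; mapping them onto T10 DIF layout is an assumption.
data_bytes=4096 block_size=512 dif_size=8
blocks=$((data_bytes / block_size))   # 8 protected blocks per buffer
pi_bytes=$((blocks * dif_size))       # 64 bytes of protection information
echo "$blocks blocks -> $pi_bytes bytes of DIF per $data_bytes-byte buffer"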
00:06:41.038 [2024-05-14 22:53:53.213507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63450 ] 00:06:41.038 [2024-05-14 22:53:53.351052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.038 [2024-05-14 22:53:53.426820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:41.296 22:53:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:42.232 22:53:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.232 00:06:42.232 real 0m1.414s 00:06:42.232 user 0m0.014s 00:06:42.232 sys 0m0.001s 00:06:42.232 22:53:54 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.232 ************************************ 00:06:42.232 END TEST accel_dif_verify 00:06:42.232 ************************************ 00:06:42.232 22:53:54 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:42.493 22:53:54 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:42.493 22:53:54 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:42.493 22:53:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.493 22:53:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.493 ************************************ 00:06:42.493 START TEST accel_dif_generate 00:06:42.493 ************************************ 00:06:42.493 22:53:54 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:42.493 22:53:54 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:42.493 [2024-05-14 22:53:54.669540] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:42.493 [2024-05-14 22:53:54.669653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63479 ] 00:06:42.493 [2024-05-14 22:53:54.809190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.493 [2024-05-14 22:53:54.882102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 
22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:42.752 22:53:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:43.687 22:53:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.687 00:06:43.687 real 0m1.418s 00:06:43.687 user 0m1.242s 00:06:43.687 sys 0m0.080s 00:06:43.687 22:53:56 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.687 
************************************ 00:06:43.687 END TEST accel_dif_generate 00:06:43.687 22:53:56 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:43.687 ************************************ 00:06:43.946 22:53:56 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:43.946 22:53:56 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:43.946 22:53:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.946 22:53:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.946 ************************************ 00:06:43.946 START TEST accel_dif_generate_copy 00:06:43.946 ************************************ 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:43.946 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:43.946 [2024-05-14 22:53:56.133255] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
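The START/END banners and the real/user/sys lines interleaved through this section come from the harness timing each sub-test; a minimal sketch of that pattern (illustrative only, not SPDK's autotest_common.sh) looks like:

# Sketch of the run_test banner-and-timing pattern seen in this log.
run_test_sketch() {
    local name=$1; shift
    printf '************************************\n'
    printf 'START TEST %s\n' "$name"
    printf '************************************\n'
    time "$@"                      # prints the real/user/sys lines seen above
    printf '************************************\n'
    printf 'END TEST %s\n' "$name"
    printf '************************************\n'
}
# e.g. run_test_sketch accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy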
00:06:43.946 [2024-05-14 22:53:56.133347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63513 ] 00:06:43.946 [2024-05-14 22:53:56.269027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.205 [2024-05-14 22:53:56.337442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.205 22:53:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.141 00:06:45.141 real 0m1.401s 00:06:45.141 user 0m0.012s 00:06:45.141 sys 0m0.002s 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.141 22:53:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:45.141 ************************************ 00:06:45.141 END TEST accel_dif_generate_copy 00:06:45.141 ************************************ 00:06:45.398 22:53:57 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:45.398 22:53:57 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.398 22:53:57 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:45.398 22:53:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.398 22:53:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.398 ************************************ 00:06:45.398 START TEST accel_comp 00:06:45.398 ************************************ 00:06:45.398 22:53:57 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:45.398 22:53:57 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:45.398 [2024-05-14 22:53:57.581683] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:45.398 [2024-05-14 22:53:57.581832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63548 ] 00:06:45.398 [2024-05-14 22:53:57.718093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.656 [2024-05-14 22:53:57.790407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:45.656 22:53:57 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:45.656 22:53:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:46.590 22:53:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.590 00:06:46.591 real 0m1.419s 00:06:46.591 user 0m1.247s 00:06:46.591 sys 0m0.066s 00:06:46.591 22:53:58 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.591 22:53:58 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:46.591 ************************************ 00:06:46.591 END TEST accel_comp 00:06:46.591 ************************************ 00:06:46.850 22:53:59 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:46.850 22:53:59 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:46.850 22:53:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.850 22:53:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.850 ************************************ 00:06:46.850 START TEST accel_decomp 00:06:46.850 ************************************ 00:06:46.850 22:53:59 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:46.850 
22:53:59 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:46.850 22:53:59 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:46.850 [2024-05-14 22:53:59.048645] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:46.850 [2024-05-14 22:53:59.048750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63577 ] 00:06:46.850 [2024-05-14 22:53:59.182941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.850 [2024-05-14 22:53:59.239692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.110 22:53:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:48.046 22:54:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.046 00:06:48.046 real 0m1.395s 00:06:48.046 user 0m1.225s 00:06:48.046 sys 0m0.074s 00:06:48.046 22:54:00 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.046 22:54:00 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:48.046 ************************************ 00:06:48.046 END TEST accel_decomp 00:06:48.046 ************************************ 00:06:48.305 22:54:00 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:48.305 22:54:00 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:48.305 22:54:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.305 22:54:00 accel -- common/autotest_common.sh@10 -- # set +x 
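[Note] The run_test line above queues the full-buffer variant of the decompress test: the same bib fixture, but with -o 0 so accel_perf works on the whole file rather than 4096-byte blocks (hence the '111250 bytes' values echoed in the config dump below, versus '4096 bytes' in the block-sized runs above). A hedged standalone equivalent, under the same path assumptions and without the fd-62 JSON config the harness generates:
    # sketch of the equivalent direct invocation
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0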
00:06:48.305 ************************************ 00:06:48.305 START TEST accel_decmop_full 00:06:48.305 ************************************ 00:06:48.305 22:54:00 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:48.305 22:54:00 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:48.305 [2024-05-14 22:54:00.492129] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:48.305 [2024-05-14 22:54:00.492234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63617 ] 00:06:48.305 [2024-05-14 22:54:00.625615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.305 [2024-05-14 22:54:00.684869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:48.565 22:54:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.504 22:54:01 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:49.504 22:54:01 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.504 ************************************ 00:06:49.504 END TEST accel_decmop_full 00:06:49.504 ************************************ 00:06:49.504 00:06:49.504 real 0m1.401s 00:06:49.504 user 0m1.222s 00:06:49.504 sys 0m0.085s 00:06:49.504 22:54:01 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.504 22:54:01 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:49.764 22:54:01 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:49.764 22:54:01 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:49.764 22:54:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.764 22:54:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.764 ************************************ 00:06:49.764 START TEST accel_decomp_mcore 00:06:49.764 ************************************ 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
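[Note] The mcore variant launched above repeats the decompress workload with -m 0xf, a core mask selecting four cores; the four "Reactor started on core 0..3" notices that follow confirm the mask took effect. A hedged standalone sketch with the same layout assumptions:
    # sketch: same decompress workload pinned to cores 0-3 via the 0xf core mask
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf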
00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:49.764 22:54:01 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:49.764 [2024-05-14 22:54:01.933601] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:49.764 [2024-05-14 22:54:01.933678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63646 ] 00:06:49.764 [2024-05-14 22:54:02.069067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.764 [2024-05-14 22:54:02.130081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.764 [2024-05-14 22:54:02.130157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.764 [2024-05-14 22:54:02.130284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.764 [2024-05-14 22:54:02.130287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.023 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.023 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.023 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.023 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.023 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.023 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.023 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.023 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.023 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.024 22:54:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.959 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.960 00:06:50.960 real 0m1.398s 00:06:50.960 user 0m0.015s 00:06:50.960 sys 0m0.002s 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.960 ************************************ 00:06:50.960 END TEST accel_decomp_mcore 00:06:50.960 22:54:03 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:50.960 ************************************ 00:06:50.960 22:54:03 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:50.960 22:54:03 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:50.960 22:54:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.960 22:54:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.220 ************************************ 00:06:51.220 START TEST accel_decomp_full_mcore 00:06:51.220 ************************************ 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:51.220 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
00:06:51.220 [2024-05-14 22:54:03.379572] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:51.220 [2024-05-14 22:54:03.379692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63689 ] 00:06:51.220 [2024-05-14 22:54:03.517721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.220 [2024-05-14 22:54:03.592594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.220 [2024-05-14 22:54:03.592728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.220 [2024-05-14 22:54:03.592848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.220 [2024-05-14 22:54:03.592851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.548 22:54:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.548 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.549 22:54:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.549 22:54:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.488 22:54:04 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.488 00:06:52.488 real 0m1.448s 00:06:52.488 user 0m0.020s 00:06:52.488 sys 0m0.003s 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.488 22:54:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:52.488 ************************************ 00:06:52.488 END TEST accel_decomp_full_mcore 00:06:52.488 ************************************ 00:06:52.488 22:54:04 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:52.488 22:54:04 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:52.488 22:54:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.488 22:54:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.488 ************************************ 00:06:52.488 START TEST accel_decomp_mthread 00:06:52.488 ************************************ 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:52.488 22:54:04 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:52.747 [2024-05-14 22:54:04.882226] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:52.747 [2024-05-14 22:54:04.882374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63721 ] 00:06:52.747 [2024-05-14 22:54:05.027438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.747 [2024-05-14 22:54:05.090205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.747 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.747 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.747 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.747 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:52.748 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.006 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.006 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.006 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.006 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.006 22:54:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.944 00:06:53.944 real 0m1.426s 00:06:53.944 user 0m1.251s 00:06:53.944 sys 0m0.082s 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.944 22:54:06 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:53.944 ************************************ 00:06:53.944 END TEST accel_decomp_mthread 00:06:53.944 ************************************ 00:06:53.944 22:54:06 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:53.944 22:54:06 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:53.944 22:54:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.944 22:54:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.944 ************************************ 00:06:53.944 START TEST accel_decomp_full_mthread 00:06:53.944 ************************************ 00:06:53.944 22:54:06 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:53.944 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:54.204 [2024-05-14 22:54:06.343965] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:54.204 [2024-05-14 22:54:06.344052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63750 ] 00:06:54.204 [2024-05-14 22:54:06.477562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.204 [2024-05-14 22:54:06.538502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.204 22:54:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.583 00:06:55.583 real 0m1.424s 00:06:55.583 user 0m1.257s 00:06:55.583 sys 0m0.075s 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.583 22:54:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:55.583 ************************************ 00:06:55.583 END TEST accel_decomp_full_mthread 00:06:55.583 ************************************ 00:06:55.583 22:54:07 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:06:55.583 22:54:07 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:55.583 22:54:07 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:55.583 22:54:07 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:55.583 22:54:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.583 22:54:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.583 22:54:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.583 22:54:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.583 22:54:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.583 22:54:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.583 22:54:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.583 22:54:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:55.583 22:54:07 accel -- accel/accel.sh@41 -- # jq -r . 00:06:55.583 ************************************ 00:06:55.583 START TEST accel_dif_functional_tests 00:06:55.583 ************************************ 00:06:55.583 22:54:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:55.583 [2024-05-14 22:54:07.844451] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:55.583 [2024-05-14 22:54:07.844544] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63791 ] 00:06:55.842 [2024-05-14 22:54:07.981256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.842 [2024-05-14 22:54:08.044181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.842 [2024-05-14 22:54:08.044313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.842 [2024-05-14 22:54:08.044319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.842 00:06:55.842 00:06:55.842 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.842 http://cunit.sourceforge.net/ 00:06:55.842 00:06:55.842 00:06:55.842 Suite: accel_dif 00:06:55.842 Test: verify: DIF generated, GUARD check ...passed 00:06:55.842 Test: verify: DIF generated, APPTAG check ...passed 00:06:55.842 Test: verify: DIF generated, REFTAG check ...passed 00:06:55.842 Test: verify: DIF not generated, GUARD check ...passed 00:06:55.842 Test: verify: DIF not generated, APPTAG check ...[2024-05-14 22:54:08.094937] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:55.842 [2024-05-14 22:54:08.095018] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:55.842 [2024-05-14 22:54:08.095055] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:55.842 passed 00:06:55.842 Test: verify: DIF not generated, REFTAG check ...passed 00:06:55.842 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:55.842 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:55.842 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:55.842 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-05-14 22:54:08.095090] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:55.842 [2024-05-14 22:54:08.095116] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:55.842 [2024-05-14 22:54:08.095149] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:55.842 [2024-05-14 22:54:08.095205] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:55.842 passed 00:06:55.842 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:55.842 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:55.842 Test: generate copy: DIF generated, GUARD check ...passed 00:06:55.842 Test: generate copy: DIF generated, APTTAG check ...[2024-05-14 22:54:08.095355] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:55.842 passed 00:06:55.842 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:55.842 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:55.842 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:55.842 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:55.842 Test: generate copy: iovecs-len validate ...passed 00:06:55.842 Test: generate copy: buffer alignment validate ...passed 00:06:55.842 00:06:55.842 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.842 suites 1 1 n/a 0 0 00:06:55.842 tests 20 20 20 0 0 00:06:55.842 asserts 204 204 204 0 n/a 00:06:55.842 00:06:55.842 Elapsed time = 0.002 seconds 00:06:55.842 [2024-05-14 22:54:08.095619] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:56.101 00:06:56.101 real 0m0.476s 00:06:56.101 user 0m0.555s 00:06:56.101 sys 0m0.098s 00:06:56.101 22:54:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.101 22:54:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:56.101 ************************************ 00:06:56.101 END TEST accel_dif_functional_tests 00:06:56.101 ************************************ 00:06:56.101 00:06:56.101 real 0m31.707s 00:06:56.101 user 0m33.808s 00:06:56.101 sys 0m2.813s 00:06:56.101 22:54:08 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.101 22:54:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.101 ************************************ 00:06:56.101 END TEST accel 00:06:56.101 ************************************ 00:06:56.101 22:54:08 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:56.101 22:54:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:56.101 22:54:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.101 22:54:08 -- common/autotest_common.sh@10 -- # set +x 00:06:56.101 ************************************ 00:06:56.101 START TEST accel_rpc 00:06:56.101 ************************************ 00:06:56.101 22:54:08 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:56.101 * Looking for test storage... 
00:06:56.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:56.101 22:54:08 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:56.101 22:54:08 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63856 00:06:56.101 22:54:08 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:56.101 22:54:08 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 63856 00:06:56.101 22:54:08 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 63856 ']' 00:06:56.101 22:54:08 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.101 22:54:08 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.101 22:54:08 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.101 22:54:08 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.101 22:54:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.360 [2024-05-14 22:54:08.510941] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:56.360 [2024-05-14 22:54:08.511047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63856 ] 00:06:56.360 [2024-05-14 22:54:08.655298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.360 [2024-05-14 22:54:08.726883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.618 22:54:08 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.618 22:54:08 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:56.618 22:54:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:56.618 22:54:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:56.618 22:54:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:56.618 22:54:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:56.618 22:54:08 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:56.618 22:54:08 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:56.618 22:54:08 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.618 22:54:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.618 ************************************ 00:06:56.618 START TEST accel_assign_opcode 00:06:56.618 ************************************ 00:06:56.618 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:56.618 22:54:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:56.618 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.618 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:56.618 [2024-05-14 22:54:08.783486] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:56.618 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.618 22:54:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # 
rpc_cmd accel_assign_opc -o copy -m software 00:06:56.618 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.618 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:56.619 [2024-05-14 22:54:08.791468] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.619 software 00:06:56.619 00:06:56.619 real 0m0.198s 00:06:56.619 user 0m0.042s 00:06:56.619 sys 0m0.014s 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.619 22:54:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:56.619 ************************************ 00:06:56.619 END TEST accel_assign_opcode 00:06:56.619 ************************************ 00:06:56.878 22:54:09 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 63856 00:06:56.878 22:54:09 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 63856 ']' 00:06:56.878 22:54:09 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 63856 00:06:56.878 22:54:09 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:56.878 22:54:09 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:56.878 22:54:09 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63856 00:06:56.878 22:54:09 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:56.878 22:54:09 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:56.878 killing process with pid 63856 00:06:56.878 22:54:09 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63856' 00:06:56.878 22:54:09 accel_rpc -- common/autotest_common.sh@965 -- # kill 63856 00:06:56.878 22:54:09 accel_rpc -- common/autotest_common.sh@970 -- # wait 63856 00:06:57.136 00:06:57.136 real 0m0.962s 00:06:57.136 user 0m0.976s 00:06:57.136 sys 0m0.304s 00:06:57.136 22:54:09 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.136 22:54:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.136 ************************************ 00:06:57.136 END TEST accel_rpc 00:06:57.136 ************************************ 00:06:57.136 22:54:09 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.136 22:54:09 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.136 22:54:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.136 22:54:09 -- common/autotest_common.sh@10 -- # set +x 00:06:57.136 ************************************ 00:06:57.136 START TEST app_cmdline 00:06:57.137 ************************************ 00:06:57.137 22:54:09 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.137 * Looking for test storage... 00:06:57.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.137 22:54:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:57.137 22:54:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63947 00:06:57.137 22:54:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63947 00:06:57.137 22:54:09 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:57.137 22:54:09 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 63947 ']' 00:06:57.137 22:54:09 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.137 22:54:09 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:57.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.137 22:54:09 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.137 22:54:09 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:57.137 22:54:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.137 [2024-05-14 22:54:09.509183] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:57.137 [2024-05-14 22:54:09.509287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63947 ] 00:06:57.395 [2024-05-14 22:54:09.647756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.395 [2024-05-14 22:54:09.717700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.330 22:54:10 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:58.330 22:54:10 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:58.330 22:54:10 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:58.589 { 00:06:58.589 "fields": { 00:06:58.589 "commit": "297733650", 00:06:58.589 "major": 24, 00:06:58.589 "minor": 5, 00:06:58.589 "patch": 0, 00:06:58.589 "suffix": "-pre" 00:06:58.589 }, 00:06:58.589 "version": "SPDK v24.05-pre git sha1 297733650" 00:06:58.589 } 00:06:58.589 22:54:10 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:58.589 22:54:10 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:58.589 22:54:10 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:58.589 22:54:10 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:58.589 22:54:10 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.589 22:54:10 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:58.589 22:54:10 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.589 22:54:10 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:58.589 22:54:10 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:58.589 22:54:10 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:58.589 22:54:10 app_cmdline -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.848 2024/05/14 22:54:11 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:58.848 request: 00:06:58.848 { 00:06:58.848 "method": "env_dpdk_get_mem_stats", 00:06:58.848 "params": {} 00:06:58.848 } 00:06:58.848 Got JSON-RPC error response 00:06:58.848 GoRPCClient: error on JSON-RPC call 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.848 22:54:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63947 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 63947 ']' 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 63947 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63947 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:58.848 killing process with pid 63947 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63947' 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@965 -- # kill 63947 00:06:58.848 22:54:11 app_cmdline -- common/autotest_common.sh@970 -- # wait 63947 00:06:59.106 00:06:59.106 real 0m2.076s 00:06:59.106 user 0m2.765s 00:06:59.106 sys 0m0.388s 00:06:59.106 22:54:11 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.106 22:54:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.106 ************************************ 00:06:59.106 END TEST app_cmdline 00:06:59.106 ************************************ 00:06:59.106 22:54:11 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.106 22:54:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.106 22:54:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.106 22:54:11 -- common/autotest_common.sh@10 -- # set +x 00:06:59.365 ************************************ 00:06:59.365 START TEST version 00:06:59.365 ************************************ 00:06:59.365 22:54:11 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.365 * Looking for test storage... 
00:06:59.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:59.365 22:54:11 version -- app/version.sh@17 -- # get_header_version major 00:06:59.365 22:54:11 version -- app/version.sh@14 -- # cut -f2 00:06:59.365 22:54:11 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.365 22:54:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.365 22:54:11 version -- app/version.sh@17 -- # major=24 00:06:59.365 22:54:11 version -- app/version.sh@18 -- # get_header_version minor 00:06:59.365 22:54:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.365 22:54:11 version -- app/version.sh@14 -- # cut -f2 00:06:59.365 22:54:11 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.365 22:54:11 version -- app/version.sh@18 -- # minor=5 00:06:59.365 22:54:11 version -- app/version.sh@19 -- # get_header_version patch 00:06:59.365 22:54:11 version -- app/version.sh@14 -- # cut -f2 00:06:59.365 22:54:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.365 22:54:11 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.365 22:54:11 version -- app/version.sh@19 -- # patch=0 00:06:59.365 22:54:11 version -- app/version.sh@20 -- # get_header_version suffix 00:06:59.365 22:54:11 version -- app/version.sh@14 -- # cut -f2 00:06:59.365 22:54:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.365 22:54:11 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.365 22:54:11 version -- app/version.sh@20 -- # suffix=-pre 00:06:59.365 22:54:11 version -- app/version.sh@22 -- # version=24.5 00:06:59.365 22:54:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:59.365 22:54:11 version -- app/version.sh@28 -- # version=24.5rc0 00:06:59.365 22:54:11 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:59.365 22:54:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:59.365 22:54:11 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:59.365 22:54:11 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:59.365 00:06:59.365 real 0m0.145s 00:06:59.365 user 0m0.083s 00:06:59.365 sys 0m0.089s 00:06:59.365 22:54:11 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.365 22:54:11 version -- common/autotest_common.sh@10 -- # set +x 00:06:59.365 ************************************ 00:06:59.365 END TEST version 00:06:59.365 ************************************ 00:06:59.365 22:54:11 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:59.365 22:54:11 -- spdk/autotest.sh@194 -- # uname -s 00:06:59.365 22:54:11 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:59.365 22:54:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:59.365 22:54:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:59.365 22:54:11 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:59.365 22:54:11 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:59.365 22:54:11 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:59.365 22:54:11 -- common/autotest_common.sh@726 -- # xtrace_disable 
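version.sh, traced above, pulls each component out of include/spdk/version.h with grep/cut/tr and cross-checks the result against the Python package. A condensed sketch of that extraction, assuming the repo path used in this run (the '-pre' suffix is what turns 24.5 into the 24.5rc0 the Python side reports):

  repo=/home/vagrant/spdk_repo/spdk

  get_header_version() {
      # '#define SPDK_VERSION_MAJOR 24' -> '24'; tr strips the quotes around SUFFIX.
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$repo/include/spdk/version.h" | cut -f2 | tr -d '"'
  }

  major=$(get_header_version MAJOR)     # 24
  minor=$(get_header_version MINOR)     # 5
  patch=$(get_header_version PATCH)     # 0
  suffix=$(get_header_version SUFFIX)   # -pre

  version=$major.$minor
  (( patch != 0 )) && version+=".$patch"
  [[ $suffix == -pre ]] && version+=rc0

  py_version=$(PYTHONPATH=$repo/python python3 -c 'import spdk; print(spdk.__version__)')
  [[ $version == "$py_version" ]] && echo "header and python package agree: $version"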
00:06:59.365 22:54:11 -- common/autotest_common.sh@10 -- # set +x 00:06:59.365 22:54:11 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:59.365 22:54:11 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:59.365 22:54:11 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:06:59.365 22:54:11 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:06:59.365 22:54:11 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:06:59.365 22:54:11 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:06:59.365 22:54:11 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:59.365 22:54:11 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:59.365 22:54:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.365 22:54:11 -- common/autotest_common.sh@10 -- # set +x 00:06:59.365 ************************************ 00:06:59.365 START TEST nvmf_tcp 00:06:59.365 ************************************ 00:06:59.365 22:54:11 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:59.625 * Looking for test storage... 00:06:59.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.625 22:54:11 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.625 22:54:11 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.625 22:54:11 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.625 22:54:11 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.625 22:54:11 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.625 22:54:11 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.625 22:54:11 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.625 22:54:11 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:59.625 22:54:11 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:59.626 22:54:11 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:59.626 22:54:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:59.626 22:54:11 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:59.626 22:54:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:59.626 22:54:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.626 22:54:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.626 ************************************ 00:06:59.626 START TEST nvmf_example 00:06:59.626 ************************************ 00:06:59.626 22:54:11 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:59.626 * Looking for test storage... 00:06:59.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:59.626 Cannot find device "nvmf_init_br" 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:59.626 Cannot find device "nvmf_tgt_br" 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:59.626 Cannot find device "nvmf_tgt_br2" 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:59.626 Cannot find device "nvmf_init_br" 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:06:59.626 22:54:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:59.626 Cannot find device "nvmf_tgt_br" 00:06:59.626 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:06:59.626 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:59.885 Cannot find device "nvmf_tgt_br2" 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:59.885 Cannot find device "nvmf_br" 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:59.885 Cannot find device "nvmf_init_if" 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:59.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:59.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
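At this point nvmf_veth_init has built a three-legged veth topology: the initiator half (nvmf_init_if, 10.0.0.1/24) stays in the root namespace, while the target halves (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24) live inside nvmf_tgt_ns_spdk; the *_br peer ends are tied together by the nvmf_br bridge in the steps that follow. Condensed, the layout is:

  ip netns add nvmf_tgt_ns_spdk

  # One veth pair per leg; the *_br peers join the nvmf_br bridge below.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Target-side ends move into the namespace; the initiator end stays in the root namespace.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2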
00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:59.885 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:00.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:07:00.144 00:07:00.144 --- 10.0.0.2 ping statistics --- 00:07:00.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.144 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:00.144 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:00.144 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:07:00.144 00:07:00.144 --- 10.0.0.3 ping statistics --- 00:07:00.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.144 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:00.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:00.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:00.144 00:07:00.144 --- 10.0.0.1 ping statistics --- 00:07:00.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.144 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64303 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 64303 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 64303 ']' 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:00.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
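With connectivity across the bridge verified by the pings above, the example target is launched inside the namespace and then provisioned over /var/tmp/spdk.sock, as the rpc_cmd sequence below shows: a TCP transport, a 64 MiB malloc bdev, a subsystem, a namespace and a listener on 10.0.0.2:4420. A minimal sketch of the same steps (paths, NQN and flags taken from this run; the sleep is only a crude stand-in for waitforlisten, which polls the RPC socket):

  repo=/home/vagrant/spdk_repo/spdk
  rpc=$repo/scripts/rpc.py

  # Start the example nvmf target inside the target namespace with the flags used here.
  ip netns exec nvmf_tgt_ns_spdk "$repo/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
  nvmfpid=$!
  sleep 2

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                  # 64 MiB, 512-byte blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420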
00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:00.144 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:00.403 22:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:12.612 Initializing NVMe Controllers 00:07:12.612 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:12.612 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:12.612 Initialization complete. Launching workers. 00:07:12.612 ======================================================== 00:07:12.612 Latency(us) 00:07:12.612 Device Information : IOPS MiB/s Average min max 00:07:12.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14185.79 55.41 4511.13 769.65 23225.96 00:07:12.612 ======================================================== 00:07:12.612 Total : 14185.79 55.41 4511.13 769.65 23225.96 00:07:12.612 00:07:12.612 22:54:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:12.612 22:54:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:12.612 22:54:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:12.612 22:54:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:12.612 rmmod nvme_tcp 00:07:12.612 rmmod nvme_fabrics 00:07:12.612 rmmod nvme_keyring 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64303 ']' 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64303 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 64303 ']' 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 64303 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64303 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64303' 00:07:12.612 killing process with pid 64303 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 64303 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 64303 00:07:12.612 nvmf threads initialize successfully 00:07:12.612 bdev subsystem init successfully 00:07:12.612 created a nvmf target service 00:07:12.612 create targets's poll groups done 00:07:12.612 all subsystems of target started 00:07:12.612 nvmf target is running 00:07:12.612 all subsystems of target stopped 00:07:12.612 destroy targets's poll groups done 00:07:12.612 destroyed the nvmf target service 00:07:12.612 bdev subsystem finish successfully 00:07:12.612 nvmf threads destroy successfully 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:12.612 22:54:23 
nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.612 00:07:12.612 real 0m11.504s 00:07:12.612 user 0m40.942s 00:07:12.612 sys 0m1.952s 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.612 22:54:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.612 ************************************ 00:07:12.612 END TEST nvmf_example 00:07:12.612 ************************************ 00:07:12.612 22:54:23 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:12.612 22:54:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:12.612 22:54:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.612 22:54:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.612 ************************************ 00:07:12.612 START TEST nvmf_filesystem 00:07:12.612 ************************************ 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:12.612 * Looking for test storage... 
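The nvmftestfini teardown traced above unwinds the whole example: it unloads the host-side NVMe modules, kills the example target, removes the spdk namespace and flushes the initiator address. Condensed (the namespace removal itself is hidden behind xtrace_disable_per_cmd in the log, so the ip netns delete shown here is an assumption about what _remove_spdk_ns does):

  modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, as above
  modprobe -v -r nvme-fabrics

  kill "$nvmfpid" && wait "$nvmfpid"    # nvmfpid was 64303 in this run

  ip netns delete nvmf_tgt_ns_spdk      # assumed body of _remove_spdk_ns (hidden by xtrace above)
  ip -4 addr flush nvmf_init_if         # clear 10.0.0.1/24 from the initiator interface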
00:07:12.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:12.612 22:54:23 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:12.613 
22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:12.613 #define SPDK_CONFIG_H 00:07:12.613 #define SPDK_CONFIG_APPS 1 00:07:12.613 #define SPDK_CONFIG_ARCH native 00:07:12.613 #undef SPDK_CONFIG_ASAN 00:07:12.613 #define SPDK_CONFIG_AVAHI 1 00:07:12.613 #undef SPDK_CONFIG_CET 00:07:12.613 #define SPDK_CONFIG_COVERAGE 1 00:07:12.613 #define SPDK_CONFIG_CROSS_PREFIX 00:07:12.613 #undef SPDK_CONFIG_CRYPTO 00:07:12.613 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:12.613 #undef SPDK_CONFIG_CUSTOMOCF 00:07:12.613 #undef SPDK_CONFIG_DAOS 00:07:12.613 #define SPDK_CONFIG_DAOS_DIR 00:07:12.613 #define SPDK_CONFIG_DEBUG 1 00:07:12.613 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:12.613 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:12.613 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:12.613 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:12.613 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:12.613 #undef SPDK_CONFIG_DPDK_UADK 00:07:12.613 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:12.613 #define SPDK_CONFIG_EXAMPLES 1 00:07:12.613 #undef SPDK_CONFIG_FC 00:07:12.613 #define SPDK_CONFIG_FC_PATH 00:07:12.613 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:12.613 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:12.613 #undef SPDK_CONFIG_FUSE 00:07:12.613 #undef SPDK_CONFIG_FUZZER 00:07:12.613 #define SPDK_CONFIG_FUZZER_LIB 00:07:12.613 #define SPDK_CONFIG_GOLANG 1 00:07:12.613 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:12.613 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:12.613 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:12.613 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:12.613 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:12.613 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:12.613 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:12.613 #define SPDK_CONFIG_IDXD 1 00:07:12.613 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:12.613 #undef SPDK_CONFIG_IPSEC_MB 00:07:12.613 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:12.613 #define SPDK_CONFIG_ISAL 1 00:07:12.613 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:12.613 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:12.613 #define SPDK_CONFIG_LIBDIR 00:07:12.613 #undef SPDK_CONFIG_LTO 00:07:12.613 #define SPDK_CONFIG_MAX_LCORES 00:07:12.613 #define SPDK_CONFIG_NVME_CUSE 1 00:07:12.613 #undef SPDK_CONFIG_OCF 00:07:12.613 #define SPDK_CONFIG_OCF_PATH 00:07:12.613 #define SPDK_CONFIG_OPENSSL_PATH 00:07:12.613 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:12.613 #define SPDK_CONFIG_PGO_DIR 00:07:12.613 #undef SPDK_CONFIG_PGO_USE 00:07:12.613 #define SPDK_CONFIG_PREFIX /usr/local 00:07:12.613 #undef SPDK_CONFIG_RAID5F 00:07:12.613 #undef SPDK_CONFIG_RBD 00:07:12.613 #define SPDK_CONFIG_RDMA 1 00:07:12.613 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:12.613 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:12.613 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 
00:07:12.613 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:12.613 #define SPDK_CONFIG_SHARED 1 00:07:12.613 #undef SPDK_CONFIG_SMA 00:07:12.613 #define SPDK_CONFIG_TESTS 1 00:07:12.613 #undef SPDK_CONFIG_TSAN 00:07:12.613 #define SPDK_CONFIG_UBLK 1 00:07:12.613 #define SPDK_CONFIG_UBSAN 1 00:07:12.613 #undef SPDK_CONFIG_UNIT_TESTS 00:07:12.613 #undef SPDK_CONFIG_URING 00:07:12.613 #define SPDK_CONFIG_URING_PATH 00:07:12.613 #undef SPDK_CONFIG_URING_ZNS 00:07:12.613 #define SPDK_CONFIG_USDT 1 00:07:12.613 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:12.613 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:12.613 #undef SPDK_CONFIG_VFIO_USER 00:07:12.613 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:12.613 #define SPDK_CONFIG_VHOST 1 00:07:12.613 #define SPDK_CONFIG_VIRTIO 1 00:07:12.613 #undef SPDK_CONFIG_VTUNE 00:07:12.613 #define SPDK_CONFIG_VTUNE_DIR 00:07:12.613 #define SPDK_CONFIG_WERROR 1 00:07:12.613 #define SPDK_CONFIG_WPDK_DIR 00:07:12.613 #undef SPDK_CONFIG_XNVME 00:07:12.613 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.613 22:54:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:12.614 
22:54:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:12.614 
22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:12.614 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 1 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 
00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 1 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 1 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:12.615 22:54:23 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 64533 ]] 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 64533 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback 
storage_candidates 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.zRqOwQ 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:12.615 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.zRqOwQ/tests/target /tmp/spdk.zRqOwQ 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=4194304 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=4194304 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6264516608 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267891712 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=2494353408 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=2507157504 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12804096 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=13815222272 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5208911872 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=13815222272 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5208911872 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda2 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=843546624 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1012768768 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=100016128 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda3 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=92499968 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=104607744 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12107776 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6267756544 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267895808 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=139264 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1253572608 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253576704 00:07:12.616 22:54:23 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=93807161344 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5895618560 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:12.616 * Looking for test storage... 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/home 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=13815222272 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == tmpfs ]] 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == ramfs ]] 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ /home == / ]] 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:12.616 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.617 22:54:23 
nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:12.617 Cannot find device "nvmf_tgt_br" 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:12.617 Cannot find device "nvmf_tgt_br2" 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:07:12.617 22:54:23 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:12.617 Cannot find device "nvmf_tgt_br" 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:12.617 Cannot find device "nvmf_tgt_br2" 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:12.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:12.617 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:12.617 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:12.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:07:12.618 00:07:12.618 --- 10.0.0.2 ping statistics --- 00:07:12.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.618 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:12.618 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:12.618 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:07:12.618 00:07:12.618 --- 10.0.0.3 ping statistics --- 00:07:12.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.618 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:12.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:12.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:07:12.618 00:07:12.618 --- 10.0.0.1 ping statistics --- 00:07:12.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.618 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.618 ************************************ 00:07:12.618 START TEST nvmf_filesystem_no_in_capsule 00:07:12.618 ************************************ 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
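The nvmf_veth_init sequence traced above assembles the virtual test network before the target application is launched: a network namespace for the target, veth pairs whose bridge-side ends are enslaved to a common bridge, addresses on 10.0.0.0/24, and an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed sketch of that topology, using only the names and addresses visible in the trace (first target interface only; the real helper also creates nvmf_tgt_if2/10.0.0.3 and tears down any previous run first):

  ip netns add nvmf_tgt_ns_spdk                                  # namespace the nvmf target will run in
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                                # bridge joining the two pairs
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP to the listener port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                             # initiator -> target reachability check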
target/filesystem.sh@47 -- # in_capsule=0 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=64694 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 64694 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 64694 ']' 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:12.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:12.618 22:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.618 [2024-05-14 22:54:24.040117] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:07:12.618 [2024-05-14 22:54:24.040214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.618 [2024-05-14 22:54:24.181915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.618 [2024-05-14 22:54:24.258405] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.618 [2024-05-14 22:54:24.258476] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.618 [2024-05-14 22:54:24.258500] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.618 [2024-05-14 22:54:24.258510] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.618 [2024-05-14 22:54:24.258519] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:12.618 [2024-05-14 22:54:24.258586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.618 [2024-05-14 22:54:24.258684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.618 [2024-05-14 22:54:24.259258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.618 [2024-05-14 22:54:24.259322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.877 [2024-05-14 22:54:25.146282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.877 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.135 Malloc1 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.135 [2024-05-14 22:54:25.293293] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:13.135 [2024-05-14 22:54:25.293576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:13.135 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.136 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.136 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.136 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:13.136 { 00:07:13.136 "aliases": [ 00:07:13.136 "59affdb6-a45f-4d73-b7be-fbe676479d33" 00:07:13.136 ], 00:07:13.136 "assigned_rate_limits": { 00:07:13.136 "r_mbytes_per_sec": 0, 00:07:13.136 "rw_ios_per_sec": 0, 00:07:13.136 "rw_mbytes_per_sec": 0, 00:07:13.136 "w_mbytes_per_sec": 0 00:07:13.136 }, 00:07:13.136 "block_size": 512, 00:07:13.136 "claim_type": "exclusive_write", 00:07:13.136 "claimed": true, 00:07:13.136 "driver_specific": {}, 00:07:13.136 "memory_domains": [ 00:07:13.136 { 00:07:13.136 "dma_device_id": "system", 00:07:13.136 "dma_device_type": 1 00:07:13.136 }, 00:07:13.136 { 00:07:13.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.136 "dma_device_type": 2 00:07:13.136 } 00:07:13.136 ], 00:07:13.136 "name": "Malloc1", 00:07:13.136 "num_blocks": 1048576, 00:07:13.136 "product_name": "Malloc disk", 00:07:13.136 "supported_io_types": { 00:07:13.136 "abort": true, 00:07:13.136 "compare": false, 00:07:13.136 "compare_and_write": false, 00:07:13.136 "flush": true, 00:07:13.136 "nvme_admin": false, 00:07:13.136 "nvme_io": false, 00:07:13.136 "read": true, 00:07:13.136 "reset": true, 00:07:13.136 
"unmap": true, 00:07:13.136 "write": true, 00:07:13.136 "write_zeroes": true 00:07:13.136 }, 00:07:13.136 "uuid": "59affdb6-a45f-4d73-b7be-fbe676479d33", 00:07:13.136 "zoned": false 00:07:13.136 } 00:07:13.136 ]' 00:07:13.136 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:13.136 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:13.136 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:13.136 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:13.136 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:13.136 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:13.136 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:13.136 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.394 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:13.394 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:13.394 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:13.394 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:13.394 22:54:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:15.368 22:54:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:15.368 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:15.626 22:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:16.562 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.563 ************************************ 00:07:16.563 START TEST filesystem_ext4 00:07:16.563 ************************************ 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:16.563 22:54:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:16.563 mke2fs 1.46.5 (30-Dec-2021) 00:07:16.563 Discarding device blocks: 0/522240 done 00:07:16.563 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:16.563 Filesystem UUID: 2de5c796-ab71-4374-a947-6b9352facef7 00:07:16.563 Superblock backups stored on blocks: 00:07:16.563 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:16.563 00:07:16.563 Allocating group tables: 0/64 done 00:07:16.563 Writing inode tables: 0/64 done 00:07:16.563 Creating journal (8192 blocks): done 00:07:16.563 Writing superblocks and filesystem accounting information: 0/64 done 00:07:16.563 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:16.563 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.821 22:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 64694 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.821 00:07:16.821 real 0m0.296s 00:07:16.821 user 0m0.023s 00:07:16.821 sys 0m0.049s 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:16.821 ************************************ 00:07:16.821 END TEST filesystem_ext4 00:07:16.821 ************************************ 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.821 22:54:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.821 ************************************ 00:07:16.821 START TEST filesystem_btrfs 00:07:16.821 ************************************ 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:16.821 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:17.080 btrfs-progs v6.6.2 00:07:17.080 See https://btrfs.readthedocs.io for more information. 00:07:17.080 00:07:17.080 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:17.080 NOTE: several default settings have changed in version 5.15, please make sure 00:07:17.080 this does not affect your deployments: 00:07:17.080 - DUP for metadata (-m dup) 00:07:17.080 - enabled no-holes (-O no-holes) 00:07:17.080 - enabled free-space-tree (-R free-space-tree) 00:07:17.080 00:07:17.080 Label: (null) 00:07:17.080 UUID: 3c97e3bd-7215-4be2-b46f-973ac39427c1 00:07:17.080 Node size: 16384 00:07:17.080 Sector size: 4096 00:07:17.080 Filesystem size: 510.00MiB 00:07:17.080 Block group profiles: 00:07:17.080 Data: single 8.00MiB 00:07:17.080 Metadata: DUP 32.00MiB 00:07:17.080 System: DUP 8.00MiB 00:07:17.080 SSD detected: yes 00:07:17.080 Zoned device: no 00:07:17.080 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:17.080 Runtime features: free-space-tree 00:07:17.080 Checksum: crc32c 00:07:17.080 Number of devices: 1 00:07:17.080 Devices: 00:07:17.080 ID SIZE PATH 00:07:17.080 1 510.00MiB /dev/nvme0n1p1 00:07:17.080 00:07:17.080 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:17.080 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:17.080 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:17.080 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:17.080 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:17.080 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 64694 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:17.081 00:07:17.081 real 0m0.178s 00:07:17.081 user 0m0.027s 00:07:17.081 sys 0m0.061s 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:17.081 ************************************ 00:07:17.081 END TEST filesystem_btrfs 00:07:17.081 ************************************ 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:17.081 22:54:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.081 ************************************ 00:07:17.081 START TEST filesystem_xfs 00:07:17.081 ************************************ 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:17.081 22:54:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:17.081 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:17.081 = sectsz=512 attr=2, projid32bit=1 00:07:17.081 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:17.081 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:17.081 data = bsize=4096 blocks=130560, imaxpct=25 00:07:17.081 = sunit=0 swidth=0 blks 00:07:17.081 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:17.081 log =internal log bsize=4096 blocks=16384, version=2 00:07:17.081 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:17.081 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:18.026 Discarding blocks...Done. 
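The run_test calls above drive the same create-and-verify body for ext4, btrfs and xfs against /dev/nvme0n1p1. A minimal sketch of that flow, reconstructed from the xtrace (the function and helper names follow the trace; the umount retry handling is a simplification and $nvmfpid stands in for the literal target pid):

```bash
# Sketch of the body behind "run_test filesystem_<fs> nvmf_filesystem_create <fs> nvme0n1".
nvmf_filesystem_create() {
    local fstype=$1      # ext4 | btrfs | xfs
    local nvme_name=$2   # host block device name, e.g. nvme0n1

    make_filesystem "$fstype" "/dev/${nvme_name}p1"

    mount "/dev/${nvme_name}p1" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    # The target must still be running, and both the namespace device and its
    # partition must still be visible on the host after the I/O round-trip.
    kill -0 "$nvmfpid"                              # $nvmfpid: target pid kept by the harness
    lsblk -l -o NAME | grep -q -w "$nvme_name"
    lsblk -l -o NAME | grep -q -w "${nvme_name}p1"
}
```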
00:07:18.026 22:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:18.026 22:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 64694 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.554 00:07:20.554 real 0m3.274s 00:07:20.554 user 0m0.019s 00:07:20.554 sys 0m0.060s 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:20.554 ************************************ 00:07:20.554 END TEST filesystem_xfs 00:07:20.554 ************************************ 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:20.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:20.554 
22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 64694 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 64694 ']' 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 64694 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:20.554 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:20.555 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64694 00:07:20.555 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:20.555 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:20.555 killing process with pid 64694 00:07:20.555 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64694' 00:07:20.555 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 64694 00:07:20.555 [2024-05-14 22:54:32.788856] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:20.555 22:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 64694 00:07:20.813 ************************************ 00:07:20.813 END TEST nvmf_filesystem_no_in_capsule 00:07:20.813 ************************************ 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:20.813 00:07:20.813 real 0m9.104s 00:07:20.813 user 0m34.465s 00:07:20.813 sys 0m1.520s 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
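Each variant ends with the teardown traced above: drop the test partition, disconnect the host controller, delete the subsystem, then stop the target. A hedged sketch, assuming rpc_cmd forwards to scripts/rpc.py on the running target's RPC socket and simplifying the serial-wait loop used by waitforserial_disconnect:

```bash
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # remove the SPDK_TEST partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach the host-side controller

# Wait until no block device reports the SPDKISFASTANDAWESOME serial any more
# (simplified stand-in for the waitforserial_disconnect helper).
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1
done

rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
killprocess "$nvmfpid"                             # stop the nvmf_tgt process
```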
00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.813 ************************************ 00:07:20.813 START TEST nvmf_filesystem_in_capsule 00:07:20.813 ************************************ 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65005 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65005 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 65005 ']' 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:20.813 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.813 [2024-05-14 22:54:33.188699] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:07:20.813 [2024-05-14 22:54:33.188819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.071 [2024-05-14 22:54:33.329197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.071 [2024-05-14 22:54:33.389604] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.071 [2024-05-14 22:54:33.389870] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.071 [2024-05-14 22:54:33.390029] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.071 [2024-05-14 22:54:33.390176] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
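The in-capsule variant begins by relaunching the target through nvmfappstart: nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace and the harness waits for /var/tmp/spdk.sock to answer before issuing RPCs. A minimal sketch; the polling loop is an assumption standing in for the harness's waitforlisten helper:

```bash
# Start the target in the test network namespace and record its pid.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Block until the JSON-RPC socket accepts requests (rpc_get_methods is a cheap query).
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done
```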
00:07:21.071 [2024-05-14 22:54:33.390217] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.071 [2024-05-14 22:54:33.390445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.071 [2024-05-14 22:54:33.390543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.071 [2024-05-14 22:54:33.390659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.071 [2024-05-14 22:54:33.390663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.342 [2024-05-14 22:54:33.516310] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.342 Malloc1 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.342 22:54:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.342 [2024-05-14 22:54:33.642267] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:21.342 [2024-05-14 22:54:33.642543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:21.342 { 00:07:21.342 "aliases": [ 00:07:21.342 "ffbefb72-b3e9-4b97-bb64-e86e77a880b1" 00:07:21.342 ], 00:07:21.342 "assigned_rate_limits": { 00:07:21.342 "r_mbytes_per_sec": 0, 00:07:21.342 "rw_ios_per_sec": 0, 00:07:21.342 "rw_mbytes_per_sec": 0, 00:07:21.342 "w_mbytes_per_sec": 0 00:07:21.342 }, 00:07:21.342 "block_size": 512, 00:07:21.342 "claim_type": "exclusive_write", 00:07:21.342 "claimed": true, 00:07:21.342 "driver_specific": {}, 00:07:21.342 "memory_domains": [ 00:07:21.342 { 00:07:21.342 "dma_device_id": "system", 00:07:21.342 "dma_device_type": 1 00:07:21.342 }, 00:07:21.342 { 00:07:21.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.342 "dma_device_type": 2 00:07:21.342 } 00:07:21.342 ], 00:07:21.342 "name": "Malloc1", 00:07:21.342 "num_blocks": 1048576, 00:07:21.342 "product_name": "Malloc disk", 00:07:21.342 "supported_io_types": { 00:07:21.342 "abort": true, 00:07:21.342 "compare": false, 00:07:21.342 "compare_and_write": false, 00:07:21.342 "flush": true, 00:07:21.342 "nvme_admin": false, 00:07:21.342 "nvme_io": false, 00:07:21.342 "read": true, 00:07:21.342 "reset": true, 
00:07:21.342 "unmap": true, 00:07:21.342 "write": true, 00:07:21.342 "write_zeroes": true 00:07:21.342 }, 00:07:21.342 "uuid": "ffbefb72-b3e9-4b97-bb64-e86e77a880b1", 00:07:21.342 "zoned": false 00:07:21.342 } 00:07:21.342 ]' 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:21.342 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:21.608 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:21.608 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:21.608 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:21.608 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:21.608 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.608 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:21.608 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:21.608 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:21.608 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:21.608 22:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 
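The rpc_cmd calls traced here mirror the earlier run; the only difference is the in-capsule data size handed to nvmf_create_transport (-c 4096 here versus -c 0 before). A consolidated sketch of the target provisioning and host attach, assuming rpc_cmd wraps scripts/rpc.py against the running target:

```bash
# Target side: TCP transport, a 512 MiB malloc bdev, and a subsystem exposing it.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
rpc_cmd bdev_malloc_create 512 512 -b Malloc1          # 512 MiB, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: size the bdev from its JSON description, then attach over NVMe/TCP.
bs=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')
nb=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')
malloc_size=$((bs * nb))                               # 512 * 1048576 = 536870912 bytes
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de \
             --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```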
00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:24.158 22:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:24.158 22:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:24.158 22:54:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.740 ************************************ 00:07:24.740 START TEST filesystem_in_capsule_ext4 00:07:24.740 ************************************ 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:24.740 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:24.740 mke2fs 1.46.5 (30-Dec-2021) 00:07:24.997 Discarding device blocks: 0/522240 done 00:07:24.997 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:24.997 Filesystem UUID: 6d23b8f1-dc4d-4b42-8621-e4b1fadf01a8 00:07:24.997 Superblock backups stored on blocks: 00:07:24.997 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:24.997 00:07:24.997 Allocating group tables: 0/64 done 00:07:24.997 Writing inode tables: 0/64 done 00:07:24.997 Creating journal (8192 blocks): done 00:07:24.997 Writing superblocks and filesystem accounting information: 0/64 done 00:07:24.997 00:07:24.997 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:24.997 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:24.997 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:24.997 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:24.997 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:24.997 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:24.997 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:24.997 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:24.997 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65005 00:07:24.997 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:24.997 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.254 ************************************ 00:07:25.254 END TEST filesystem_in_capsule_ext4 00:07:25.254 ************************************ 00:07:25.254 00:07:25.254 real 0m0.302s 00:07:25.254 user 0m0.018s 00:07:25.254 sys 0m0.053s 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.254 ************************************ 00:07:25.254 START TEST filesystem_in_capsule_btrfs 00:07:25.254 ************************************ 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:25.254 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:25.254 btrfs-progs v6.6.2 00:07:25.254 See https://btrfs.readthedocs.io for more information. 00:07:25.254 00:07:25.254 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:25.254 NOTE: several default settings have changed in version 5.15, please make sure 00:07:25.254 this does not affect your deployments: 00:07:25.254 - DUP for metadata (-m dup) 00:07:25.254 - enabled no-holes (-O no-holes) 00:07:25.254 - enabled free-space-tree (-R free-space-tree) 00:07:25.254 00:07:25.254 Label: (null) 00:07:25.254 UUID: 4f236fe5-ba62-43cb-959e-487816f517f3 00:07:25.254 Node size: 16384 00:07:25.254 Sector size: 4096 00:07:25.255 Filesystem size: 510.00MiB 00:07:25.255 Block group profiles: 00:07:25.255 Data: single 8.00MiB 00:07:25.255 Metadata: DUP 32.00MiB 00:07:25.255 System: DUP 8.00MiB 00:07:25.255 SSD detected: yes 00:07:25.255 Zoned device: no 00:07:25.255 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:25.255 Runtime features: free-space-tree 00:07:25.255 Checksum: crc32c 00:07:25.255 Number of devices: 1 00:07:25.255 Devices: 00:07:25.255 ID SIZE PATH 00:07:25.255 1 510.00MiB /dev/nvme0n1p1 00:07:25.255 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65005 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.255 ************************************ 00:07:25.255 END TEST filesystem_in_capsule_btrfs 00:07:25.255 ************************************ 00:07:25.255 00:07:25.255 real 0m0.168s 00:07:25.255 user 0m0.015s 00:07:25.255 sys 0m0.058s 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.255 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.512 ************************************ 00:07:25.512 START TEST filesystem_in_capsule_xfs 00:07:25.512 ************************************ 00:07:25.512 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:25.512 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:25.512 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.512 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:25.512 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:25.512 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:25.512 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:25.512 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:25.512 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:25.512 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:25.512 22:54:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:25.512 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:25.512 = sectsz=512 attr=2, projid32bit=1 00:07:25.512 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:25.512 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:25.512 data = bsize=4096 blocks=130560, imaxpct=25 00:07:25.512 = sunit=0 swidth=0 blks 00:07:25.512 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:25.512 log =internal log bsize=4096 blocks=16384, version=2 00:07:25.512 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:25.512 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:26.076 Discarding blocks...Done. 
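Every mkfs invocation in these runs goes through the make_filesystem wrapper visible in the xtrace, which mainly picks the right force flag per filesystem (-F for ext4, -f otherwise). A reduced sketch; the retry counter kept by the real helper (local i=0) is dropped here:

```bash
make_filesystem() {
    local fstype=$1          # ext4 | btrfs | xfs
    local dev_name=$2        # e.g. /dev/nvme0n1p1
    local force

    if [ "$fstype" = ext4 ]; then
        force=-F             # mkfs.ext4 uses -F to force
    else
        force=-f             # mkfs.btrfs / mkfs.xfs use -f
    fi

    mkfs."$fstype" "$force" "$dev_name"
}
```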
00:07:26.077 22:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:26.077 22:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65005 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:27.976 ************************************ 00:07:27.976 END TEST filesystem_in_capsule_xfs 00:07:27.976 ************************************ 00:07:27.976 00:07:27.976 real 0m2.540s 00:07:27.976 user 0m0.020s 00:07:27.976 sys 0m0.051s 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:27.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.976 22:54:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65005 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 65005 ']' 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 65005 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65005 00:07:27.976 killing process with pid 65005 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65005' 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 65005 00:07:27.976 [2024-05-14 22:54:40.359021] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:27.976 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 65005 00:07:28.543 ************************************ 00:07:28.543 END TEST nvmf_filesystem_in_capsule 00:07:28.543 ************************************ 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:28.543 00:07:28.543 real 0m7.570s 00:07:28.543 user 0m28.051s 00:07:28.543 sys 0m1.481s 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 
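Stripped of the xtrace prefixes, the in-capsule teardown that just ran boils down to the sequence below (rpc_cmd is the framework's wrapper around the JSON-RPC socket at /var/tmp/spdk.sock; it and killprocess are taken as given here):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1              # drop the test partition (filesystem.sh@91)
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1               # detach the kernel initiator (@94)
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # remove the subsystem on the target (@97)
killprocess "$nvmfpid"                                      # SIGTERM plus wait on the nvmf_tgt reactor, pid 65005 in this run

The nvmftestfini that starts next unloads the host-side modules and tears the namespace back down.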
00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.543 rmmod nvme_tcp 00:07:28.543 rmmod nvme_fabrics 00:07:28.543 rmmod nvme_keyring 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:28.543 ************************************ 00:07:28.543 END TEST nvmf_filesystem 00:07:28.543 ************************************ 00:07:28.543 00:07:28.543 real 0m17.456s 00:07:28.543 user 1m2.748s 00:07:28.543 sys 0m3.344s 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.543 22:54:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.543 22:54:40 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:28.543 22:54:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:28.543 22:54:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.543 22:54:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.543 ************************************ 00:07:28.543 START TEST nvmf_target_discovery 00:07:28.543 ************************************ 00:07:28.543 22:54:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:28.804 * Looking for test storage... 
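Before the discovery suite gets going, note that the nvmftestfini traced above reduces to the following cleanup (what _remove_spdk_ns does internally is an assumption here):

sync
modprobe -v -r nvme-tcp        # the rmmod lines above show this cascading to nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics    # effectively a no-op by then, kept inside the set +e retry loop
_remove_spdk_ns                # assumption: deletes the nvmf_tgt_ns_spdk namespace left by the previous suite
ip -4 addr flush nvmf_init_if  # strip 10.0.0.1/24 off the initiator-side veth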
00:07:28.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.804 22:54:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:28.804 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:28.805 Cannot find device "nvmf_tgt_br" 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:28.805 Cannot find device "nvmf_tgt_br2" 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:28.805 Cannot find device "nvmf_tgt_br" 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:28.805 Cannot find device "nvmf_tgt_br2" 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:28.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:28.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:28.805 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:29.063 22:54:41 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:29.063 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:29.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:07:29.063 00:07:29.063 --- 10.0.0.2 ping statistics --- 00:07:29.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.064 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:29.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:29.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:07:29.064 00:07:29.064 --- 10.0.0.3 ping statistics --- 00:07:29.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.064 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:29.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:29.064 00:07:29.064 --- 10.0.0.1 ping statistics --- 00:07:29.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.064 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=65439 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 65439 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 65439 ']' 00:07:29.064 22:54:41 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:29.064 22:54:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:29.064 [2024-05-14 22:54:41.446401] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:07:29.064 [2024-05-14 22:54:41.446490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.322 [2024-05-14 22:54:41.587448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.322 [2024-05-14 22:54:41.657930] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.322 [2024-05-14 22:54:41.657990] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.322 [2024-05-14 22:54:41.658004] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.322 [2024-05-14 22:54:41.658015] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.322 [2024-05-14 22:54:41.658023] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
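In plainer terms, nvmfappstart -m 0xF did the following, with the path, flags, and pid taken straight from the trace (waitforlisten's polling detail belongs to the framework and is only outlined):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                 # 65439 in this run
waitforlisten "$nvmfpid"   # blocks until the app answers on /var/tmp/spdk.sock

The EAL notice above and the reactor lines that follow are that target finishing startup on cores 0 through 3.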
00:07:29.322 [2024-05-14 22:54:41.658525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.322 [2024-05-14 22:54:41.658654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.322 [2024-05-14 22:54:41.658715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.322 [2024-05-14 22:54:41.658724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.256 [2024-05-14 22:54:42.507219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.256 Null1 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.256 [2024-05-14 22:54:42.570156] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:30.256 [2024-05-14 22:54:42.570590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.256 Null2 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.256 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.257 Null3 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery 
-- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.257 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.516 Null4 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -a 10.0.0.2 -s 4420 00:07:30.516 00:07:30.516 Discovery Log Number of Records 6, Generation counter 6 00:07:30.516 =====Discovery Log Entry 0====== 00:07:30.516 trtype: tcp 00:07:30.516 adrfam: ipv4 00:07:30.516 subtype: current discovery subsystem 00:07:30.516 treq: not required 00:07:30.516 portid: 0 00:07:30.516 trsvcid: 4420 00:07:30.516 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:30.516 traddr: 10.0.0.2 00:07:30.516 eflags: explicit discovery connections, duplicate discovery information 00:07:30.516 sectype: none 00:07:30.516 =====Discovery Log Entry 1====== 00:07:30.516 trtype: tcp 00:07:30.516 adrfam: ipv4 00:07:30.516 subtype: nvme subsystem 00:07:30.516 treq: not required 00:07:30.516 portid: 0 00:07:30.516 trsvcid: 4420 00:07:30.516 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:30.516 traddr: 10.0.0.2 00:07:30.516 eflags: none 00:07:30.516 sectype: none 00:07:30.516 =====Discovery Log Entry 2====== 00:07:30.516 trtype: tcp 00:07:30.516 adrfam: ipv4 00:07:30.516 subtype: nvme subsystem 00:07:30.516 treq: not required 00:07:30.516 portid: 0 00:07:30.516 trsvcid: 4420 00:07:30.516 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:30.516 traddr: 10.0.0.2 00:07:30.516 eflags: none 00:07:30.516 sectype: none 00:07:30.516 =====Discovery Log Entry 3====== 00:07:30.516 trtype: tcp 00:07:30.516 adrfam: ipv4 00:07:30.516 subtype: nvme subsystem 00:07:30.516 treq: not required 00:07:30.516 portid: 0 00:07:30.516 trsvcid: 4420 00:07:30.516 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:30.516 traddr: 10.0.0.2 00:07:30.516 eflags: none 00:07:30.516 sectype: none 00:07:30.516 =====Discovery Log Entry 4====== 00:07:30.516 trtype: tcp 00:07:30.516 adrfam: ipv4 00:07:30.516 subtype: nvme subsystem 00:07:30.516 treq: not required 00:07:30.516 portid: 0 00:07:30.516 trsvcid: 4420 00:07:30.516 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:30.516 traddr: 10.0.0.2 00:07:30.516 eflags: none 00:07:30.516 sectype: none 00:07:30.516 =====Discovery Log Entry 5====== 00:07:30.516 trtype: tcp 00:07:30.516 adrfam: ipv4 00:07:30.516 subtype: discovery subsystem referral 00:07:30.516 treq: not required 00:07:30.516 portid: 0 00:07:30.516 trsvcid: 4430 00:07:30.516 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:30.516 traddr: 10.0.0.2 00:07:30.516 eflags: none 00:07:30.516 sectype: none 00:07:30.516 Perform nvmf subsystem discovery via RPC 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.516 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.516 [ 00:07:30.516 { 00:07:30.516 "allow_any_host": true, 00:07:30.516 "hosts": [], 00:07:30.516 "listen_addresses": [ 00:07:30.516 { 00:07:30.516 "adrfam": "IPv4", 00:07:30.516 "traddr": "10.0.0.2", 00:07:30.516 "trsvcid": "4420", 00:07:30.516 "trtype": "TCP" 00:07:30.516 } 00:07:30.516 ], 00:07:30.516 
"nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:30.516 "subtype": "Discovery" 00:07:30.516 }, 00:07:30.516 { 00:07:30.516 "allow_any_host": true, 00:07:30.516 "hosts": [], 00:07:30.516 "listen_addresses": [ 00:07:30.516 { 00:07:30.516 "adrfam": "IPv4", 00:07:30.516 "traddr": "10.0.0.2", 00:07:30.516 "trsvcid": "4420", 00:07:30.516 "trtype": "TCP" 00:07:30.516 } 00:07:30.516 ], 00:07:30.516 "max_cntlid": 65519, 00:07:30.516 "max_namespaces": 32, 00:07:30.516 "min_cntlid": 1, 00:07:30.516 "model_number": "SPDK bdev Controller", 00:07:30.516 "namespaces": [ 00:07:30.516 { 00:07:30.516 "bdev_name": "Null1", 00:07:30.516 "name": "Null1", 00:07:30.516 "nguid": "ED4132C64308416A84FA4C55BF724197", 00:07:30.516 "nsid": 1, 00:07:30.516 "uuid": "ed4132c6-4308-416a-84fa-4c55bf724197" 00:07:30.516 } 00:07:30.516 ], 00:07:30.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.516 "serial_number": "SPDK00000000000001", 00:07:30.516 "subtype": "NVMe" 00:07:30.516 }, 00:07:30.516 { 00:07:30.516 "allow_any_host": true, 00:07:30.516 "hosts": [], 00:07:30.516 "listen_addresses": [ 00:07:30.516 { 00:07:30.516 "adrfam": "IPv4", 00:07:30.516 "traddr": "10.0.0.2", 00:07:30.516 "trsvcid": "4420", 00:07:30.516 "trtype": "TCP" 00:07:30.516 } 00:07:30.516 ], 00:07:30.516 "max_cntlid": 65519, 00:07:30.516 "max_namespaces": 32, 00:07:30.516 "min_cntlid": 1, 00:07:30.516 "model_number": "SPDK bdev Controller", 00:07:30.516 "namespaces": [ 00:07:30.516 { 00:07:30.516 "bdev_name": "Null2", 00:07:30.516 "name": "Null2", 00:07:30.516 "nguid": "76AD165A551F4351B42A4D157AEDFAE8", 00:07:30.516 "nsid": 1, 00:07:30.516 "uuid": "76ad165a-551f-4351-b42a-4d157aedfae8" 00:07:30.516 } 00:07:30.516 ], 00:07:30.516 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:30.516 "serial_number": "SPDK00000000000002", 00:07:30.516 "subtype": "NVMe" 00:07:30.516 }, 00:07:30.516 { 00:07:30.516 "allow_any_host": true, 00:07:30.516 "hosts": [], 00:07:30.516 "listen_addresses": [ 00:07:30.516 { 00:07:30.516 "adrfam": "IPv4", 00:07:30.516 "traddr": "10.0.0.2", 00:07:30.516 "trsvcid": "4420", 00:07:30.516 "trtype": "TCP" 00:07:30.516 } 00:07:30.516 ], 00:07:30.516 "max_cntlid": 65519, 00:07:30.516 "max_namespaces": 32, 00:07:30.516 "min_cntlid": 1, 00:07:30.516 "model_number": "SPDK bdev Controller", 00:07:30.516 "namespaces": [ 00:07:30.516 { 00:07:30.516 "bdev_name": "Null3", 00:07:30.516 "name": "Null3", 00:07:30.516 "nguid": "5D8C32F5CE8F47518AE058C7ED2961C6", 00:07:30.516 "nsid": 1, 00:07:30.516 "uuid": "5d8c32f5-ce8f-4751-8ae0-58c7ed2961c6" 00:07:30.516 } 00:07:30.516 ], 00:07:30.516 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:30.516 "serial_number": "SPDK00000000000003", 00:07:30.516 "subtype": "NVMe" 00:07:30.516 }, 00:07:30.516 { 00:07:30.516 "allow_any_host": true, 00:07:30.516 "hosts": [], 00:07:30.516 "listen_addresses": [ 00:07:30.516 { 00:07:30.516 "adrfam": "IPv4", 00:07:30.516 "traddr": "10.0.0.2", 00:07:30.516 "trsvcid": "4420", 00:07:30.516 "trtype": "TCP" 00:07:30.516 } 00:07:30.516 ], 00:07:30.516 "max_cntlid": 65519, 00:07:30.516 "max_namespaces": 32, 00:07:30.516 "min_cntlid": 1, 00:07:30.516 "model_number": "SPDK bdev Controller", 00:07:30.516 "namespaces": [ 00:07:30.516 { 00:07:30.516 "bdev_name": "Null4", 00:07:30.516 "name": "Null4", 00:07:30.517 "nguid": "0E33558F825B4B36BAC01BD9306092F4", 00:07:30.517 "nsid": 1, 00:07:30.517 "uuid": "0e33558f-825b-4b36-bac0-1bd9306092f4" 00:07:30.517 } 00:07:30.517 ], 00:07:30.517 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:30.517 "serial_number": "SPDK00000000000004", 00:07:30.517 "subtype": 
"NVMe" 00:07:30.517 } 00:07:30.517 ] 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.517 22:54:42 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:30.517 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.775 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:30.775 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:30.775 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:30.775 22:54:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:30.775 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:30.775 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:30.775 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:30.775 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:30.776 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.776 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:30.776 rmmod nvme_tcp 00:07:30.776 rmmod nvme_fabrics 00:07:30.776 rmmod nvme_keyring 00:07:30.776 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.776 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:30.776 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:30.776 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 65439 ']' 00:07:30.776 22:54:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 65439 00:07:30.776 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 65439 ']' 00:07:30.776 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 65439 00:07:30.776 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:07:30.776 22:54:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:30.776 
22:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65439 00:07:30.776 22:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:30.776 22:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:30.776 killing process with pid 65439 00:07:30.776 22:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65439' 00:07:30.776 22:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 65439 00:07:30.776 [2024-05-14 22:54:43.019069] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:30.776 22:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 65439 00:07:31.037 22:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:31.037 22:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:31.037 22:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:31.037 22:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.037 22:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.037 22:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.037 22:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.037 22:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.037 22:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:31.037 00:07:31.037 real 0m2.347s 00:07:31.037 user 0m6.466s 00:07:31.037 sys 0m0.544s 00:07:31.037 22:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.037 22:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.037 ************************************ 00:07:31.037 END TEST nvmf_target_discovery 00:07:31.037 ************************************ 00:07:31.037 22:54:43 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:31.037 22:54:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:31.037 22:54:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.037 22:54:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:31.037 ************************************ 00:07:31.037 START TEST nvmf_referrals 00:07:31.037 ************************************ 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:31.037 * Looking for test storage... 
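With the discovery suite finished (about 2.3 seconds wall time above), here is what it exercised in outline, reconstructed from the rpc_cmd calls traced earlier rather than quoted from target/discovery.sh:

for i in $(seq 1 4); do
    rpc_cmd bdev_null_create Null$i 102400 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
# expected: 6 discovery records, i.e. the current discovery subsystem, cnode1-4, and the 4430 referral
rpc_cmd nvmf_get_subsystems      # the JSON dump earlier, cross-checked over RPC
# then the inverse: remove the referral, delete cnode1-4, and delete the null bdevs before shutdown

The referrals suite that begins next reuses the same veth topology and the same host NQN.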
00:07:31.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.037 22:54:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:31.038 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:31.294 Cannot find device "nvmf_tgt_br" 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:31.294 Cannot find device "nvmf_tgt_br2" 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:31.294 Cannot find device "nvmf_tgt_br" 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:31.294 Cannot find device "nvmf_tgt_br2" 
00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:31.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:31.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:31.294 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:31.551 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:31.551 22:54:43 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:31.551 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:31.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:07:31.551 00:07:31.551 --- 10.0.0.2 ping statistics --- 00:07:31.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.552 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:31.552 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:31.552 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:07:31.552 00:07:31.552 --- 10.0.0.3 ping statistics --- 00:07:31.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.552 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:31.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:31.552 00:07:31.552 --- 10.0.0.1 ping statistics --- 00:07:31.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.552 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=65668 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 65668 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 65668 ']' 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
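For orientation, the network that nvmf_veth_init has just assembled in the trace above can be condensed into the short sketch below. It is a simplified restatement of the logged ip/iptables commands (run as root), not the common.sh implementation itself: the teardown of leftover devices is skipped, and every name and address (nvmf_tgt_ns_spdk, nvmf_br, 10.0.0.1 through 10.0.0.3, port 4420) is taken from the trace.

  # One namespace for the target, three veth pairs, one host-side bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # The target-facing ends move into the namespace; the *_br ends stay on the host.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Initiator address on the host, two target addresses inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side ends together; allow NVMe/TCP traffic in on 4420
  # and bridge-local forwarding, as in the trace.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity pings in both directions, matching the ping output in the log.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Once the pings succeed, the suite launches nvmf_tgt inside the namespace, which is what the "Waiting for process to start up..." lines that follow refer to.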
00:07:31.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:31.552 22:54:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:31.552 [2024-05-14 22:54:43.792702] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:07:31.552 [2024-05-14 22:54:43.792811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.552 [2024-05-14 22:54:43.929956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.811 [2024-05-14 22:54:43.988481] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.811 [2024-05-14 22:54:43.988545] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.811 [2024-05-14 22:54:43.988563] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.811 [2024-05-14 22:54:43.988577] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.811 [2024-05-14 22:54:43.988588] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.811 [2024-05-14 22:54:43.988742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.811 [2024-05-14 22:54:43.988911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.811 [2024-05-14 22:54:43.989657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.811 [2024-05-14 22:54:43.989706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:31.811 [2024-05-14 22:54:44.109946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:31.811 [2024-05-14 22:54:44.147067] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in 
favor of trtype to be removed in v24.09 00:07:31.811 [2024-05-14 22:54:44.147469] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.811 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.070 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 
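The add/verify/remove cycle the referrals test just walked through can be reproduced by hand with the same RPCs and the same jq filters. The sketch below approximates what the rpc_cmd and get_referral_ips helpers do in the trace: rpc_cmd is assumed here to be plain scripts/rpc.py talking to the default /var/tmp/spdk.sock socket, and NVME_HOSTNQN/NVME_HOSTID stand for the values produced by nvme gen-hostnqn earlier in the log.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  # Register three referrals on the discovery subsystem. Nothing in this test
  # listens on port 4430; only the reported entries are checked.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done

  # Target-side view over RPC: count and sorted addresses.
  $rpc nvmf_discovery_get_referrals | jq length                       # expect 3
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # Host-side view through the discovery service on 10.0.0.2:8009: the
  # referrals appear as extra discovery log records.
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # Drop the referrals again; both views should come back empty.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  $rpc nvmf_discovery_get_referrals | jq length                       # expect 0

The lines that follow repeat the same pattern, but with referrals that carry an explicit subsystem NQN (-n discovery versus -n nqn.2016-06.io.spdk:cnode1) so that both referral flavours show up correctly in the discovery log.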
00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:32.328 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:32.586 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:32.586 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.586 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:32.586 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:32.586 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:32.586 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:32.586 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:32.586 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.586 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:32.587 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:32.845 22:54:44 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 00:07:32.845 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:32.845 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:32.845 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:32.845 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:32.845 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.845 22:54:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- 
# sort 00:07:32.845 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:33.103 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:33.103 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:33.103 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:33.103 22:54:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:33.103 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.103 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:33.377 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.377 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:33.377 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.377 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.377 rmmod nvme_tcp 00:07:33.377 rmmod nvme_fabrics 00:07:33.653 rmmod nvme_keyring 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 65668 ']' 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 65668 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 65668 ']' 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 65668 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65668 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65668' 00:07:33.653 killing process with pid 65668 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 65668 00:07:33.653 [2024-05-14 22:54:45.808667] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 65668 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.653 22:54:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.653 22:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:33.653 00:07:33.653 real 0m2.738s 00:07:33.653 user 0m8.806s 00:07:33.653 sys 0m0.744s 00:07:33.653 22:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.653 22:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:33.653 ************************************ 00:07:33.653 END TEST nvmf_referrals 00:07:33.653 ************************************ 00:07:33.911 22:54:46 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:33.911 22:54:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:33.911 22:54:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.911 22:54:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.911 ************************************ 00:07:33.911 START TEST nvmf_connect_disconnect 00:07:33.911 ************************************ 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:33.911 * Looking for test storage... 00:07:33.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.911 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:33.912 Cannot find device "nvmf_tgt_br" 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:33.912 Cannot find device "nvmf_tgt_br2" 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:33.912 Cannot find device "nvmf_tgt_br" 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:33.912 Cannot find device "nvmf_tgt_br2" 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:33.912 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:34.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:34.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 
00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:34.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:07:34.170 00:07:34.170 --- 10.0.0.2 ping statistics --- 00:07:34.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.170 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:34.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:34.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:07:34.170 00:07:34.170 --- 10.0.0.3 ping statistics --- 00:07:34.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.170 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:34.170 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:34.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:34.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:07:34.170 00:07:34.170 --- 10.0.0.1 ping statistics --- 00:07:34.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.171 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=65960 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 65960 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 65960 ']' 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:34.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:34.171 22:54:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:34.430 [2024-05-14 22:54:46.575239] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:07:34.430 [2024-05-14 22:54:46.575331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.430 [2024-05-14 22:54:46.714217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.430 [2024-05-14 22:54:46.788177] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
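The nvmfappstart step above boils down to launching nvmf_tgt inside the test namespace and polling its RPC socket. The loop below is only a rough stand-in for the suite's waitforlisten helper, whose real implementation lives in autotest_common.sh and handles retries and error reporting more carefully; the binary path, shared-memory id (-i 0), tracepoint group mask (-e 0xFFFF) and core mask (-m 0xF) are taken verbatim from the trace.

  # Start the target in the namespace created earlier and remember its pid.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Rough equivalent of waitforlisten: poll until the RPC socket answers.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  for _ in $(seq 1 100); do
      $rpc rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.1
  done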
00:07:34.430 [2024-05-14 22:54:46.788258] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.430 [2024-05-14 22:54:46.788275] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.430 [2024-05-14 22:54:46.788285] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.430 [2024-05-14 22:54:46.788294] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.430 [2024-05-14 22:54:46.788426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.430 [2024-05-14 22:54:46.788573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.430 [2024-05-14 22:54:46.789281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.430 [2024-05-14 22:54:46.789288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:35.367 [2024-05-14 22:54:47.722899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.367 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
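With the TCP transport in place, the rest of the connect_disconnect setup is a handful of RPCs plus a host-side connect/disconnect loop. The sketch below mirrors the RPC calls visible in the trace, with the malloc bdev sized by MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512; the loop body itself is hidden behind 'set +x' in the log, so the nvme connect/disconnect pair shown here is an approximation of what connect_disconnect.sh does, not a copy of it.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  # Target side: a 64 MiB malloc bdev with 512-byte blocks, exported through
  # nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420.
  bdev=$($rpc bdev_malloc_create 64 512)        # prints the bdev name, e.g. Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: connect and disconnect the subsystem a few times; each pass
  # prints one of the "disconnected 1 controller(s)" lines seen below.
  for _ in $(seq 1 5); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
          --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done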
00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:35.625 [2024-05-14 22:54:47.790640] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:35.625 [2024-05-14 22:54:47.790909] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:35.625 22:54:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:38.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:40.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.026 22:54:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:47.026 22:54:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:47.026 22:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:47.026 22:54:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:47.026 rmmod nvme_tcp 00:07:47.026 rmmod nvme_fabrics 00:07:47.026 rmmod nvme_keyring 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 65960 ']' 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 65960 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 65960 ']' 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 65960 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65960 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65960' 00:07:47.026 killing process with pid 65960 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 65960 00:07:47.026 [2024-05-14 22:54:59.085824] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 65960 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:47.026 00:07:47.026 real 0m13.255s 00:07:47.026 user 0m48.899s 00:07:47.026 sys 0m1.887s 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.026 22:54:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:47.026 ************************************ 00:07:47.026 END TEST nvmf_connect_disconnect 00:07:47.026 ************************************ 00:07:47.026 22:54:59 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:47.026 22:54:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:47.026 22:54:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.026 22:54:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.026 ************************************ 00:07:47.026 START TEST nvmf_multitarget 00:07:47.026 ************************************ 00:07:47.026 22:54:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:47.292 * Looking for test storage... 
00:07:47.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.292 22:54:59 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:47.292 Cannot find device "nvmf_tgt_br" 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:47.292 Cannot find device "nvmf_tgt_br2" 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:47.292 Cannot find device "nvmf_tgt_br" 00:07:47.292 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:47.293 Cannot find device "nvmf_tgt_br2" 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:07:47.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:47.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:47.293 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:47.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:47.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:07:47.558 00:07:47.558 --- 10.0.0.2 ping statistics --- 00:07:47.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.558 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:47.558 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:47.558 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:07:47.558 00:07:47.558 --- 10.0.0.3 ping statistics --- 00:07:47.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.558 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:47.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:07:47.558 00:07:47.558 --- 10.0.0.1 ping statistics --- 00:07:47.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.558 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=66363 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 66363 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 66363 ']' 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:47.558 22:54:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:47.558 [2024-05-14 22:54:59.880543] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:07:47.558 [2024-05-14 22:54:59.880639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.815 [2024-05-14 22:55:00.047554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.815 [2024-05-14 22:55:00.134665] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.815 [2024-05-14 22:55:00.134754] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.815 [2024-05-14 22:55:00.134789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.815 [2024-05-14 22:55:00.134802] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.815 [2024-05-14 22:55:00.134812] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.815 [2024-05-14 22:55:00.135342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.815 [2024-05-14 22:55:00.135430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.815 [2024-05-14 22:55:00.136053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.815 [2024-05-14 22:55:00.136062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.749 22:55:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:48.749 22:55:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:07:48.749 22:55:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.749 22:55:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.749 22:55:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:48.749 22:55:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.749 22:55:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:48.749 22:55:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:48.749 22:55:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:48.749 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:48.749 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:49.007 "nvmf_tgt_1" 00:07:49.007 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:49.007 "nvmf_tgt_2" 00:07:49.007 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:49.007 22:55:01 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:49.265 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:49.265 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:49.265 true 00:07:49.265 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:49.523 true 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:49.523 rmmod nvme_tcp 00:07:49.523 rmmod nvme_fabrics 00:07:49.523 rmmod nvme_keyring 00:07:49.523 22:55:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 66363 ']' 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 66363 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 66363 ']' 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 66363 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66363 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:49.780 killing process with pid 66363 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66363' 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 66363 00:07:49.780 22:55:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 66363 00:07:49.780 22:55:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:49.780 22:55:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:49.780 22:55:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:07:49.780 22:55:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.780 22:55:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:49.780 22:55:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.780 22:55:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.780 22:55:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.037 22:55:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:50.037 00:07:50.037 real 0m2.806s 00:07:50.037 user 0m9.144s 00:07:50.037 sys 0m0.618s 00:07:50.037 22:55:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.037 22:55:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:50.037 ************************************ 00:07:50.037 END TEST nvmf_multitarget 00:07:50.037 ************************************ 00:07:50.038 22:55:02 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:50.038 22:55:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:50.038 22:55:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.038 22:55:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.038 ************************************ 00:07:50.038 START TEST nvmf_rpc 00:07:50.038 ************************************ 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:50.038 * Looking for test storage... 
00:07:50.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:50.038 Cannot find device "nvmf_tgt_br" 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:50.038 Cannot find device "nvmf_tgt_br2" 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:50.038 Cannot find device "nvmf_tgt_br" 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:50.038 Cannot find device "nvmf_tgt_br2" 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:50.038 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:50.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:50.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:50.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:07:50.295 00:07:50.295 --- 10.0.0.2 ping statistics --- 00:07:50.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.295 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:50.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:50.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:07:50.295 00:07:50.295 --- 10.0.0.3 ping statistics --- 00:07:50.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.295 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:50.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:50.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:07:50.295 00:07:50.295 --- 10.0.0.1 ping statistics --- 00:07:50.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.295 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:07:50.295 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=66594 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 66594 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 66594 ']' 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:50.296 22:55:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.552 [2024-05-14 22:55:02.734603] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:07:50.552 [2024-05-14 22:55:02.734722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.552 [2024-05-14 22:55:02.876252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.552 [2024-05-14 22:55:02.937598] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.552 [2024-05-14 22:55:02.937670] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:50.552 [2024-05-14 22:55:02.937690] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.552 [2024-05-14 22:55:02.937703] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.552 [2024-05-14 22:55:02.937714] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.814 [2024-05-14 22:55:02.941808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.814 [2024-05-14 22:55:02.941895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.814 [2024-05-14 22:55:02.941974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.814 [2024-05-14 22:55:02.941993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:50.814 "poll_groups": [ 00:07:50.814 { 00:07:50.814 "admin_qpairs": 0, 00:07:50.814 "completed_nvme_io": 0, 00:07:50.814 "current_admin_qpairs": 0, 00:07:50.814 "current_io_qpairs": 0, 00:07:50.814 "io_qpairs": 0, 00:07:50.814 "name": "nvmf_tgt_poll_group_000", 00:07:50.814 "pending_bdev_io": 0, 00:07:50.814 "transports": [] 00:07:50.814 }, 00:07:50.814 { 00:07:50.814 "admin_qpairs": 0, 00:07:50.814 "completed_nvme_io": 0, 00:07:50.814 "current_admin_qpairs": 0, 00:07:50.814 "current_io_qpairs": 0, 00:07:50.814 "io_qpairs": 0, 00:07:50.814 "name": "nvmf_tgt_poll_group_001", 00:07:50.814 "pending_bdev_io": 0, 00:07:50.814 "transports": [] 00:07:50.814 }, 00:07:50.814 { 00:07:50.814 "admin_qpairs": 0, 00:07:50.814 "completed_nvme_io": 0, 00:07:50.814 "current_admin_qpairs": 0, 00:07:50.814 "current_io_qpairs": 0, 00:07:50.814 "io_qpairs": 0, 00:07:50.814 "name": "nvmf_tgt_poll_group_002", 00:07:50.814 "pending_bdev_io": 0, 00:07:50.814 "transports": [] 00:07:50.814 }, 00:07:50.814 { 00:07:50.814 "admin_qpairs": 0, 00:07:50.814 "completed_nvme_io": 0, 00:07:50.814 "current_admin_qpairs": 0, 00:07:50.814 "current_io_qpairs": 0, 00:07:50.814 "io_qpairs": 0, 00:07:50.814 "name": "nvmf_tgt_poll_group_003", 00:07:50.814 "pending_bdev_io": 0, 00:07:50.814 "transports": [] 00:07:50.814 } 00:07:50.814 ], 00:07:50.814 "tick_rate": 2200000000 00:07:50.814 }' 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.814 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.814 [2024-05-14 22:55:03.182905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:51.073 "poll_groups": [ 00:07:51.073 { 00:07:51.073 "admin_qpairs": 0, 00:07:51.073 "completed_nvme_io": 0, 00:07:51.073 "current_admin_qpairs": 0, 00:07:51.073 "current_io_qpairs": 0, 00:07:51.073 "io_qpairs": 0, 00:07:51.073 "name": "nvmf_tgt_poll_group_000", 00:07:51.073 "pending_bdev_io": 0, 00:07:51.073 "transports": [ 00:07:51.073 { 00:07:51.073 "trtype": "TCP" 00:07:51.073 } 00:07:51.073 ] 00:07:51.073 }, 00:07:51.073 { 00:07:51.073 "admin_qpairs": 0, 00:07:51.073 "completed_nvme_io": 0, 00:07:51.073 "current_admin_qpairs": 0, 00:07:51.073 "current_io_qpairs": 0, 00:07:51.073 "io_qpairs": 0, 00:07:51.073 "name": "nvmf_tgt_poll_group_001", 00:07:51.073 "pending_bdev_io": 0, 00:07:51.073 "transports": [ 00:07:51.073 { 00:07:51.073 "trtype": "TCP" 00:07:51.073 } 00:07:51.073 ] 00:07:51.073 }, 00:07:51.073 { 00:07:51.073 "admin_qpairs": 0, 00:07:51.073 "completed_nvme_io": 0, 00:07:51.073 "current_admin_qpairs": 0, 00:07:51.073 "current_io_qpairs": 0, 00:07:51.073 "io_qpairs": 0, 00:07:51.073 "name": "nvmf_tgt_poll_group_002", 00:07:51.073 "pending_bdev_io": 0, 00:07:51.073 "transports": [ 00:07:51.073 { 00:07:51.073 "trtype": "TCP" 00:07:51.073 } 00:07:51.073 ] 00:07:51.073 }, 00:07:51.073 { 00:07:51.073 "admin_qpairs": 0, 00:07:51.073 "completed_nvme_io": 0, 00:07:51.073 "current_admin_qpairs": 0, 00:07:51.073 "current_io_qpairs": 0, 00:07:51.073 "io_qpairs": 0, 00:07:51.073 "name": "nvmf_tgt_poll_group_003", 00:07:51.073 "pending_bdev_io": 0, 00:07:51.073 "transports": [ 00:07:51.073 { 00:07:51.073 "trtype": "TCP" 00:07:51.073 } 00:07:51.073 ] 00:07:51.073 } 00:07:51.073 ], 00:07:51.073 "tick_rate": 2200000000 00:07:51.073 }' 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.073 Malloc1 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.073 [2024-05-14 22:55:03.385904] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:51.073 [2024-05-14 22:55:03.386293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de -a 10.0.0.2 -s 4420 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 
--hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de -a 10.0.0.2 -s 4420 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de -a 10.0.0.2 -s 4420 00:07:51.073 [2024-05-14 22:55:03.404404] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de' 00:07:51.073 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:51.073 could not add new controller: failed to write to nvme-fabrics device 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:51.073 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.074 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.074 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.074 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:51.331 22:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:51.331 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:51.331 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:51.331 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:51.331 22:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c 
SPDKISFASTANDAWESOME 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:53.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:53.271 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.529 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:53.529 22:55:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:07:53.529 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.530 [2024-05-14 22:55:05.685717] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 
'nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de' 00:07:53.530 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:53.530 could not add new controller: failed to write to nvme-fabrics device 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:53.530 22:55:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:56.054 22:55:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.055 
22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.055 [2024-05-14 22:55:07.963825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.055 22:55:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:56.055 22:55:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:56.055 22:55:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:56.055 22:55:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:56.055 22:55:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:56.055 22:55:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:57.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:57.951 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 [2024-05-14 22:55:10.242612] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.952 22:55:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 
--hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:58.209 22:55:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:58.209 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:58.209 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:58.209 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:58.209 22:55:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:00.106 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:00.106 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:00.106 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:00.106 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:00.106 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:00.106 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:00.106 22:55:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:00.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.106 22:55:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:00.106 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:00.106 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:00.106 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.370 
22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.370 [2024-05-14 22:55:12.541968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:00.370 22:55:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.895 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.896 [2024-05-14 22:55:14.835898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.896 22:55:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.896 22:55:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.896 22:55:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:02.896 22:55:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.896 22:55:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:02.896 22:55:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:04.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.795 [2024-05-14 22:55:17.135405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.795 22:55:17 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.795 22:55:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:05.054 22:55:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:05.054 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:05.054 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:05.054 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:05.054 22:55:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:06.954 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:06.954 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:06.954 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:06.954 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:06.954 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:06.954 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:06.954 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:07.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.211 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:07.211 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:07.211 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.211 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:07.211 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # 
for i in $(seq 1 $loops) 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 [2024-05-14 22:55:19.434471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 [2024-05-14 22:55:19.482505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 [2024-05-14 22:55:19.538554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
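
The preceding block is one pass of the second per-iteration cycle in target/rpc.sh (@99-@107): create the subsystem, add the TCP listener and the Malloc1 namespace, enable allow_any_host, then remove namespace 1 and delete the subsystem, with no host connection in between. Expressed as direct rpc.py calls rather than the test's rpc_cmd wrapper, one such cycle looks roughly like the sketch below; the loop count matches the `seq 1 5` seen above, while the rpc.py path is the one printed later by invalid.sh and is assumed to apply here as well.

    # Rough equivalent of one create/teardown loop, issued straight through rpc.py
    # instead of the test's rpc_cmd wrapper (an assumption of this sketch).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1
        "$rpc" nvmf_subsystem_allow_any_host "$nqn"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
        "$rpc" nvmf_delete_subsystem "$nqn"
    done
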
00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.212 [2024-05-14 22:55:19.590682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.212 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:07.470 22:55:19 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 [2024-05-14 22:55:19.642750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.470 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:07.470 "poll_groups": [ 00:08:07.470 { 00:08:07.470 "admin_qpairs": 2, 00:08:07.470 "completed_nvme_io": 115, 00:08:07.470 "current_admin_qpairs": 0, 00:08:07.470 "current_io_qpairs": 0, 00:08:07.470 "io_qpairs": 16, 00:08:07.470 "name": "nvmf_tgt_poll_group_000", 00:08:07.470 "pending_bdev_io": 0, 00:08:07.470 "transports": [ 00:08:07.470 { 00:08:07.470 "trtype": "TCP" 00:08:07.470 } 00:08:07.470 ] 00:08:07.470 }, 00:08:07.471 { 00:08:07.471 "admin_qpairs": 3, 00:08:07.471 "completed_nvme_io": 214, 00:08:07.471 "current_admin_qpairs": 0, 00:08:07.471 "current_io_qpairs": 
0, 00:08:07.471 "io_qpairs": 17, 00:08:07.471 "name": "nvmf_tgt_poll_group_001", 00:08:07.471 "pending_bdev_io": 0, 00:08:07.471 "transports": [ 00:08:07.471 { 00:08:07.471 "trtype": "TCP" 00:08:07.471 } 00:08:07.471 ] 00:08:07.471 }, 00:08:07.471 { 00:08:07.471 "admin_qpairs": 1, 00:08:07.471 "completed_nvme_io": 71, 00:08:07.471 "current_admin_qpairs": 0, 00:08:07.471 "current_io_qpairs": 0, 00:08:07.471 "io_qpairs": 19, 00:08:07.471 "name": "nvmf_tgt_poll_group_002", 00:08:07.471 "pending_bdev_io": 0, 00:08:07.471 "transports": [ 00:08:07.471 { 00:08:07.471 "trtype": "TCP" 00:08:07.471 } 00:08:07.471 ] 00:08:07.471 }, 00:08:07.471 { 00:08:07.471 "admin_qpairs": 1, 00:08:07.471 "completed_nvme_io": 20, 00:08:07.471 "current_admin_qpairs": 0, 00:08:07.471 "current_io_qpairs": 0, 00:08:07.471 "io_qpairs": 18, 00:08:07.471 "name": "nvmf_tgt_poll_group_003", 00:08:07.471 "pending_bdev_io": 0, 00:08:07.471 "transports": [ 00:08:07.471 { 00:08:07.471 "trtype": "TCP" 00:08:07.471 } 00:08:07.471 ] 00:08:07.471 } 00:08:07.471 ], 00:08:07.471 "tick_rate": 2200000000 00:08:07.471 }' 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.471 22:55:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.471 rmmod nvme_tcp 00:08:07.471 rmmod nvme_fabrics 00:08:07.728 rmmod nvme_keyring 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 66594 ']' 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 66594 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 66594 ']' 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 66594 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # 
uname 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66594 00:08:07.728 killing process with pid 66594 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66594' 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 66594 00:08:07.728 [2024-05-14 22:55:19.907237] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:07.728 22:55:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 66594 00:08:07.986 22:55:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:07.986 22:55:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:07.986 22:55:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:07.986 22:55:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.986 22:55:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.986 22:55:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.986 22:55:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.986 22:55:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.986 22:55:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:07.986 00:08:07.986 real 0m17.929s 00:08:07.986 user 1m6.624s 00:08:07.986 sys 0m2.644s 00:08:07.986 22:55:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.986 22:55:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.986 ************************************ 00:08:07.986 END TEST nvmf_rpc 00:08:07.986 ************************************ 00:08:07.986 22:55:20 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:07.986 22:55:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:07.986 22:55:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.986 22:55:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:07.986 ************************************ 00:08:07.986 START TEST nvmf_invalid 00:08:07.986 ************************************ 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:07.986 * Looking for test storage... 
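
Before the nvmf_invalid output continues, note the host-side pattern that every iteration of the nvmf_rpc run above followed: connect with nvme-cli, wait until a block device carrying the subsystem serial appears, then disconnect. The sketch below compresses that into plain commands using the NQN, address, and serial visible in the trace; the retry loop is a simplified stand-in for the test's waitforserial/waitforserial_disconnect helpers, not their exact logic.

    # Host-side connect / verify / disconnect cycle, as repeated throughout the run above.
    # The retry loop is a simplified stand-in for the test's waitforserial helper.
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de
    nvme connect --hostnqn="$hostnqn" --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    for i in $(seq 1 15); do
        lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME && break
        sleep 2
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
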
00:08:07.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.986 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.987 
22:55:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.987 22:55:20 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:07.987 Cannot find device "nvmf_tgt_br" 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:07.987 Cannot find device "nvmf_tgt_br2" 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:07.987 Cannot find device "nvmf_tgt_br" 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:08:07.987 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:07.987 Cannot find device "nvmf_tgt_br2" 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.244 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:08.244 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:08.245 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:08.245 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:08.245 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:08.245 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:08.245 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:08.245 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:08.245 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:08.245 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:08.245 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:08.245 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:08.502 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:08.502 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:08.502 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:08.502 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:08.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:08:08.502 00:08:08.502 --- 10.0.0.2 ping statistics --- 00:08:08.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.502 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:08:08.502 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:08.502 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:08.502 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:08:08.502 00:08:08.502 --- 10.0.0.3 ping statistics --- 00:08:08.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.502 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:08.502 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:08.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:08:08.502 00:08:08.502 --- 10.0.0.1 ping statistics --- 00:08:08.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.502 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:08.502 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.502 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67087 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67087 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 67087 ']' 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:08.503 22:55:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:08.503 [2024-05-14 22:55:20.761548] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
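To summarize the nvmf_veth_init / nvmfappstart trace above: the test builds a host-side initiator interface (10.0.0.1) and two target-side interfaces (10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk network namespace, joins their peer ends through the nvmf_br bridge, opens TCP port 4420 in iptables, verifies reachability with ping, and then launches nvmf_tgt inside the namespace. A condensed sketch of the same setup, using only commands that appear in the trace (root required; teardown of pre-existing interfaces and error handling are omitted), would be:

    # target-side interfaces live in a dedicated network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: initiator 10.0.0.1, targets 10.0.0.2 / 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP traffic (port 4420) and bridge forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # sanity check: initiator side can reach the target side

    # start the NVMe-oF target inside the namespace (the harness then waits for
    # its RPC socket before issuing the invalid-parameter RPC calls that follow)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &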
00:08:08.503 [2024-05-14 22:55:20.761647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.761 [2024-05-14 22:55:20.897526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.761 [2024-05-14 22:55:20.958596] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.761 [2024-05-14 22:55:20.958655] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.761 [2024-05-14 22:55:20.958672] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.761 [2024-05-14 22:55:20.958680] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.761 [2024-05-14 22:55:20.958687] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.761 [2024-05-14 22:55:20.958825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.761 [2024-05-14 22:55:20.958863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.761 [2024-05-14 22:55:20.959784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.761 [2024-05-14 22:55:20.959794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.761 22:55:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:08.761 22:55:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:08:08.761 22:55:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:08.761 22:55:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.761 22:55:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:08.761 22:55:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.761 22:55:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:08.761 22:55:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18691 00:08:09.326 [2024-05-14 22:55:21.488982] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:09.327 22:55:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/05/14 22:55:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18691 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:08:09.327 request: 00:08:09.327 { 00:08:09.327 "method": "nvmf_create_subsystem", 00:08:09.327 "params": { 00:08:09.327 "nqn": "nqn.2016-06.io.spdk:cnode18691", 00:08:09.327 "tgt_name": "foobar" 00:08:09.327 } 00:08:09.327 } 00:08:09.327 Got JSON-RPC error response 00:08:09.327 GoRPCClient: error on JSON-RPC call' 00:08:09.327 22:55:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/05/14 22:55:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18691 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:08:09.327 request: 00:08:09.327 { 
00:08:09.327 "method": "nvmf_create_subsystem", 00:08:09.327 "params": { 00:08:09.327 "nqn": "nqn.2016-06.io.spdk:cnode18691", 00:08:09.327 "tgt_name": "foobar" 00:08:09.327 } 00:08:09.327 } 00:08:09.327 Got JSON-RPC error response 00:08:09.327 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:09.327 22:55:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:09.327 22:55:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27074 00:08:09.585 [2024-05-14 22:55:21.809346] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27074: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:09.585 22:55:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/05/14 22:55:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27074 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:08:09.585 request: 00:08:09.585 { 00:08:09.585 "method": "nvmf_create_subsystem", 00:08:09.585 "params": { 00:08:09.585 "nqn": "nqn.2016-06.io.spdk:cnode27074", 00:08:09.585 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:08:09.585 } 00:08:09.585 } 00:08:09.585 Got JSON-RPC error response 00:08:09.585 GoRPCClient: error on JSON-RPC call' 00:08:09.585 22:55:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/05/14 22:55:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27074 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:08:09.585 request: 00:08:09.585 { 00:08:09.585 "method": "nvmf_create_subsystem", 00:08:09.585 "params": { 00:08:09.585 "nqn": "nqn.2016-06.io.spdk:cnode27074", 00:08:09.585 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:08:09.585 } 00:08:09.585 } 00:08:09.585 Got JSON-RPC error response 00:08:09.585 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:09.585 22:55:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:09.585 22:55:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7918 00:08:09.843 [2024-05-14 22:55:22.161665] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7918: invalid model number 'SPDK_Controller' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/05/14 22:55:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode7918], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:08:09.843 request: 00:08:09.843 { 00:08:09.843 "method": "nvmf_create_subsystem", 00:08:09.843 "params": { 00:08:09.843 "nqn": "nqn.2016-06.io.spdk:cnode7918", 00:08:09.843 "model_number": "SPDK_Controller\u001f" 00:08:09.843 } 00:08:09.843 } 00:08:09.843 Got JSON-RPC error response 00:08:09.843 GoRPCClient: error on JSON-RPC call' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/05/14 22:55:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode7918], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:08:09.843 request: 00:08:09.843 { 00:08:09.843 "method": "nvmf_create_subsystem", 00:08:09.843 "params": { 00:08:09.843 "nqn": "nqn.2016-06.io.spdk:cnode7918", 00:08:09.843 "model_number": "SPDK_Controller\u001f" 00:08:09.843 } 00:08:09.843 } 00:08:09.843 Got JSON-RPC error response 00:08:09.843 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:09.843 22:55:22 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:09.843 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ < == \- ]] 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '<<(y=H2S.E>1APH-y9djn' 00:08:10.102 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '<<(y=H2S.E>1APH-y9djn' nqn.2016-06.io.spdk:cnode10287 00:08:10.361 [2024-05-14 22:55:22.578029] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10287: invalid serial number '<<(y=H2S.E>1APH-y9djn' 00:08:10.361 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/05/14 22:55:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10287 serial_number:<<(y=H2S.E>1APH-y9djn], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN <<(y=H2S.E>1APH-y9djn 00:08:10.361 request: 00:08:10.361 { 00:08:10.361 "method": "nvmf_create_subsystem", 00:08:10.361 "params": { 00:08:10.361 "nqn": "nqn.2016-06.io.spdk:cnode10287", 00:08:10.361 "serial_number": "<<(y=H2S.E>1APH-y9djn" 00:08:10.361 } 00:08:10.361 } 00:08:10.361 Got JSON-RPC error response 00:08:10.361 GoRPCClient: error on JSON-RPC call' 00:08:10.361 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/05/14 22:55:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10287 serial_number:<<(y=H2S.E>1APH-y9djn], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN <<(y=H2S.E>1APH-y9djn 00:08:10.361 request: 00:08:10.361 { 00:08:10.361 "method": "nvmf_create_subsystem", 00:08:10.361 "params": { 00:08:10.361 "nqn": 
"nqn.2016-06.io.spdk:cnode10287", 00:08:10.361 "serial_number": "<<(y=H2S.E>1APH-y9djn" 00:08:10.361 } 00:08:10.361 } 00:08:10.361 Got JSON-RPC error response 00:08:10.361 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:10.361 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:10.361 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:10.362 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 
00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.363 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]] 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'R.Q26=u*R?z#KbvW=}I1a1`p~W?,|1w~{iAZQuBt{' 00:08:10.621 22:55:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'R.Q26=u*R?z#KbvW=}I1a1`p~W?,|1w~{iAZQuBt{' nqn.2016-06.io.spdk:cnode29944 00:08:10.878 [2024-05-14 22:55:23.050403] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29944: invalid model number 
'R.Q26=u*R?z#KbvW=}I1a1`p~W?,|1w~{iAZQuBt{' 00:08:10.879 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/05/14 22:55:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:R.Q26=u*R?z#KbvW=}I1a1`p~W?,|1w~{iAZQuBt{ nqn:nqn.2016-06.io.spdk:cnode29944], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN R.Q26=u*R?z#KbvW=}I1a1`p~W?,|1w~{iAZQuBt{ 00:08:10.879 request: 00:08:10.879 { 00:08:10.879 "method": "nvmf_create_subsystem", 00:08:10.879 "params": { 00:08:10.879 "nqn": "nqn.2016-06.io.spdk:cnode29944", 00:08:10.879 "model_number": "R.Q26=u*R?z#KbvW=}I1a1`p~W?,|1w~{iAZQuBt{" 00:08:10.879 } 00:08:10.879 } 00:08:10.879 Got JSON-RPC error response 00:08:10.879 GoRPCClient: error on JSON-RPC call' 00:08:10.879 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/05/14 22:55:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:R.Q26=u*R?z#KbvW=}I1a1`p~W?,|1w~{iAZQuBt{ nqn:nqn.2016-06.io.spdk:cnode29944], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN R.Q26=u*R?z#KbvW=}I1a1`p~W?,|1w~{iAZQuBt{ 00:08:10.879 request: 00:08:10.879 { 00:08:10.879 "method": "nvmf_create_subsystem", 00:08:10.879 "params": { 00:08:10.879 "nqn": "nqn.2016-06.io.spdk:cnode29944", 00:08:10.879 "model_number": "R.Q26=u*R?z#KbvW=}I1a1`p~W?,|1w~{iAZQuBt{" 00:08:10.879 } 00:08:10.879 } 00:08:10.879 Got JSON-RPC error response 00:08:10.879 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:10.879 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:11.137 [2024-05-14 22:55:23.298703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.137 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:11.394 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:11.394 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:11.394 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:11.394 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:11.394 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:11.652 [2024-05-14 22:55:23.908831] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:11.652 [2024-05-14 22:55:23.908971] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:11.652 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/05/14 22:55:23 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:08:11.652 request: 00:08:11.652 { 00:08:11.652 "method": "nvmf_subsystem_remove_listener", 00:08:11.652 "params": { 00:08:11.652 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:11.652 "listen_address": { 00:08:11.652 "trtype": "tcp", 00:08:11.652 "traddr": "", 00:08:11.652 "trsvcid": "4421" 00:08:11.652 } 00:08:11.652 } 
00:08:11.652 } 00:08:11.652 Got JSON-RPC error response 00:08:11.652 GoRPCClient: error on JSON-RPC call' 00:08:11.652 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/05/14 22:55:23 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:08:11.652 request: 00:08:11.652 { 00:08:11.652 "method": "nvmf_subsystem_remove_listener", 00:08:11.652 "params": { 00:08:11.652 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:11.652 "listen_address": { 00:08:11.652 "trtype": "tcp", 00:08:11.652 "traddr": "", 00:08:11.652 "trsvcid": "4421" 00:08:11.652 } 00:08:11.652 } 00:08:11.652 } 00:08:11.652 Got JSON-RPC error response 00:08:11.652 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:11.652 22:55:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20521 -i 0 00:08:12.218 [2024-05-14 22:55:24.353271] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20521: invalid cntlid range [0-65519] 00:08:12.218 22:55:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/05/14 22:55:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20521], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:08:12.218 request: 00:08:12.218 { 00:08:12.218 "method": "nvmf_create_subsystem", 00:08:12.218 "params": { 00:08:12.218 "nqn": "nqn.2016-06.io.spdk:cnode20521", 00:08:12.218 "min_cntlid": 0 00:08:12.218 } 00:08:12.218 } 00:08:12.218 Got JSON-RPC error response 00:08:12.218 GoRPCClient: error on JSON-RPC call' 00:08:12.218 22:55:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/05/14 22:55:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20521], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:08:12.218 request: 00:08:12.218 { 00:08:12.218 "method": "nvmf_create_subsystem", 00:08:12.218 "params": { 00:08:12.218 "nqn": "nqn.2016-06.io.spdk:cnode20521", 00:08:12.218 "min_cntlid": 0 00:08:12.218 } 00:08:12.218 } 00:08:12.218 Got JSON-RPC error response 00:08:12.218 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:12.218 22:55:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21561 -i 65520 00:08:12.476 [2024-05-14 22:55:24.721700] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21561: invalid cntlid range [65520-65519] 00:08:12.476 22:55:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/05/14 22:55:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21561], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:08:12.476 request: 00:08:12.476 { 00:08:12.476 "method": "nvmf_create_subsystem", 00:08:12.476 "params": { 00:08:12.476 "nqn": "nqn.2016-06.io.spdk:cnode21561", 00:08:12.476 "min_cntlid": 65520 00:08:12.476 } 00:08:12.476 } 00:08:12.476 Got JSON-RPC error response 
00:08:12.476 GoRPCClient: error on JSON-RPC call' 00:08:12.476 22:55:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/05/14 22:55:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21561], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:08:12.476 request: 00:08:12.476 { 00:08:12.476 "method": "nvmf_create_subsystem", 00:08:12.476 "params": { 00:08:12.476 "nqn": "nqn.2016-06.io.spdk:cnode21561", 00:08:12.476 "min_cntlid": 65520 00:08:12.476 } 00:08:12.476 } 00:08:12.476 Got JSON-RPC error response 00:08:12.476 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:12.476 22:55:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29816 -I 0 00:08:12.735 [2024-05-14 22:55:25.046013] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29816: invalid cntlid range [1-0] 00:08:12.735 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/05/14 22:55:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode29816], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:08:12.735 request: 00:08:12.735 { 00:08:12.735 "method": "nvmf_create_subsystem", 00:08:12.735 "params": { 00:08:12.735 "nqn": "nqn.2016-06.io.spdk:cnode29816", 00:08:12.735 "max_cntlid": 0 00:08:12.735 } 00:08:12.735 } 00:08:12.735 Got JSON-RPC error response 00:08:12.735 GoRPCClient: error on JSON-RPC call' 00:08:12.735 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/05/14 22:55:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode29816], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:08:12.735 request: 00:08:12.735 { 00:08:12.735 "method": "nvmf_create_subsystem", 00:08:12.735 "params": { 00:08:12.735 "nqn": "nqn.2016-06.io.spdk:cnode29816", 00:08:12.735 "max_cntlid": 0 00:08:12.735 } 00:08:12.735 } 00:08:12.735 Got JSON-RPC error response 00:08:12.735 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:12.735 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17915 -I 65520 00:08:13.302 [2024-05-14 22:55:25.458416] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17915: invalid cntlid range [1-65520] 00:08:13.302 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/05/14 22:55:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode17915], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:08:13.302 request: 00:08:13.302 { 00:08:13.302 "method": "nvmf_create_subsystem", 00:08:13.302 "params": { 00:08:13.302 "nqn": "nqn.2016-06.io.spdk:cnode17915", 00:08:13.302 "max_cntlid": 65520 00:08:13.302 } 00:08:13.302 } 00:08:13.302 Got JSON-RPC error response 00:08:13.302 GoRPCClient: error on JSON-RPC call' 00:08:13.302 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/05/14 22:55:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode17915], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:08:13.302 request: 00:08:13.302 { 00:08:13.303 "method": "nvmf_create_subsystem", 00:08:13.303 "params": { 00:08:13.303 "nqn": "nqn.2016-06.io.spdk:cnode17915", 00:08:13.303 "max_cntlid": 65520 00:08:13.303 } 00:08:13.303 } 00:08:13.303 Got JSON-RPC error response 00:08:13.303 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:13.303 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3547 -i 6 -I 5 00:08:13.561 [2024-05-14 22:55:25.790718] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3547: invalid cntlid range [6-5] 00:08:13.561 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/05/14 22:55:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode3547], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:08:13.561 request: 00:08:13.561 { 00:08:13.561 "method": "nvmf_create_subsystem", 00:08:13.561 "params": { 00:08:13.561 "nqn": "nqn.2016-06.io.spdk:cnode3547", 00:08:13.561 "min_cntlid": 6, 00:08:13.561 "max_cntlid": 5 00:08:13.561 } 00:08:13.561 } 00:08:13.561 Got JSON-RPC error response 00:08:13.561 GoRPCClient: error on JSON-RPC call' 00:08:13.561 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/05/14 22:55:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode3547], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:08:13.561 request: 00:08:13.561 { 00:08:13.561 "method": "nvmf_create_subsystem", 00:08:13.561 "params": { 00:08:13.561 "nqn": "nqn.2016-06.io.spdk:cnode3547", 00:08:13.561 "min_cntlid": 6, 00:08:13.561 "max_cntlid": 5 00:08:13.561 } 00:08:13.561 } 00:08:13.561 Got JSON-RPC error response 00:08:13.561 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:13.561 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:13.820 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:13.820 { 00:08:13.820 "name": "foobar", 00:08:13.820 "method": "nvmf_delete_target", 00:08:13.820 "req_id": 1 00:08:13.820 } 00:08:13.820 Got JSON-RPC error response 00:08:13.820 response: 00:08:13.820 { 00:08:13.820 "code": -32602, 00:08:13.820 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:13.820 }' 00:08:13.820 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:13.820 { 00:08:13.820 "name": "foobar", 00:08:13.820 "method": "nvmf_delete_target", 00:08:13.820 "req_id": 1 00:08:13.820 } 00:08:13.820 Got JSON-RPC error response 00:08:13.820 response: 00:08:13.820 { 00:08:13.820 "code": -32602, 00:08:13.820 "message": "The specified target doesn't exist, cannot delete it." 
00:08:13.820 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:13.820 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:13.820 22:55:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:13.820 22:55:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.820 22:55:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:13.820 22:55:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.820 22:55:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:13.820 22:55:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.820 22:55:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.820 rmmod nvme_tcp 00:08:13.820 rmmod nvme_fabrics 00:08:13.820 rmmod nvme_keyring 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 67087 ']' 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 67087 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 67087 ']' 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 67087 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67087 00:08:13.820 killing process with pid 67087 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67087' 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 67087 00:08:13.820 [2024-05-14 22:55:26.049497] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:13.820 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 67087 00:08:14.080 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.080 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.080 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.080 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.080 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.080 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.080 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.080 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.080 22:55:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:14.080 
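For reference, the cntlid checks exercised above reduce to a handful of rpc.py calls; a condensed sketch using the NQNs from this run (each call is expected to be rejected with Code=-32602 "Invalid cntlid range", since valid controller IDs run from 1 through 65519):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Each of these is refused by the target, matching the error output traced above.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20521 -i 0        # min_cntlid below 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21561 -i 65520    # min_cntlid above 65519
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29816 -I 0        # max_cntlid below 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17915 -I 65520    # max_cntlid above 65519
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3547 -i 6 -I 5    # min_cntlid greater than max_cntlid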
************************************ 00:08:14.080 END TEST nvmf_invalid 00:08:14.080 ************************************ 00:08:14.080 00:08:14.080 real 0m6.057s 00:08:14.080 user 0m24.855s 00:08:14.080 sys 0m1.225s 00:08:14.080 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:14.080 22:55:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:14.080 22:55:26 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:14.080 22:55:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:14.080 22:55:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.080 22:55:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:14.080 ************************************ 00:08:14.080 START TEST nvmf_abort 00:08:14.080 ************************************ 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:14.080 * Looking for test storage... 00:08:14.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.080 22:55:26 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:14.080 Cannot find device "nvmf_tgt_br" 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:08:14.080 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.339 Cannot find device "nvmf_tgt_br2" 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:14.339 Cannot find device "nvmf_tgt_br" 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:14.339 Cannot find device "nvmf_tgt_br2" 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:08:14.339 22:55:26 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:14.339 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:14.340 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:14.340 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:14.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:14.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:08:14.340 00:08:14.340 --- 10.0.0.2 ping statistics --- 00:08:14.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.340 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:14.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:14.598 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:08:14.598 00:08:14.598 --- 10.0.0.3 ping statistics --- 00:08:14.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.598 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:14.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:08:14.598 00:08:14.598 --- 10.0.0.1 ping statistics --- 00:08:14.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.598 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=67587 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 67587 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 67587 ']' 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
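Condensed, the nvmf_veth_init and nvmfappstart steps traced above amount to a pair of veth links bridged into a private network namespace, plus launching nvmf_tgt inside that namespace. A sketch using the device names and addresses from this run (the second target interface is omitted for brevity, and the readiness poll at the end is an illustrative stand-in for the harness's waitforlisten helper, not a copy of it):

# Veth pairs, with the target-side end moved into the test namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Host-side bridge joining the two veth peers, plus the firewall rules from the trace.
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Launch the target inside the namespace and wait until its RPC socket answers.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done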
00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:14.598 22:55:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:14.598 [2024-05-14 22:55:26.814323] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:08:14.598 [2024-05-14 22:55:26.814414] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.598 [2024-05-14 22:55:26.953122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.857 [2024-05-14 22:55:27.014148] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.857 [2024-05-14 22:55:27.014471] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.857 [2024-05-14 22:55:27.014602] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.857 [2024-05-14 22:55:27.014722] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.857 [2024-05-14 22:55:27.014776] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.857 [2024-05-14 22:55:27.015018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.857 [2024-05-14 22:55:27.015174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.857 [2024-05-14 22:55:27.015187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.423 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:15.423 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:08:15.423 22:55:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.423 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.423 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.681 [2024-05-14 22:55:27.848313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.681 Malloc0 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.681 Delay0 
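The rpc_cmd wrapper above effectively issues the same calls scripts/rpc.py accepts directly; the storage side of this test is just a TCP transport plus a malloc bdev wrapped in a delay bdev. Equivalent stand-alone calls, with the values exactly as used in this run (the -r/-t/-w/-n delay arguments are passed through unchanged):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256       # TCP transport with the harness's options
$rpc bdev_malloc_create 64 4096 -b Malloc0                # 64 MB malloc bdev, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # Delay0 layers artificial latency on Malloc0

The nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener calls that follow in the trace then export Delay0 over that transport at 10.0.0.2:4420.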
00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.681 [2024-05-14 22:55:27.915162] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:15.681 [2024-05-14 22:55:27.915508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.681 22:55:27 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:15.940 [2024-05-14 22:55:28.095498] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:17.898 Initializing NVMe Controllers 00:08:17.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:17.898 controller IO queue size 128 less than required 00:08:17.898 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:17.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:17.898 Initialization complete. Launching workers. 
00:08:17.898 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30102 00:08:17.898 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30163, failed to submit 62 00:08:17.898 success 30106, unsuccess 57, failed 0 00:08:17.898 22:55:30 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:17.899 rmmod nvme_tcp 00:08:17.899 rmmod nvme_fabrics 00:08:17.899 rmmod nvme_keyring 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 67587 ']' 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 67587 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 67587 ']' 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 67587 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67587 00:08:17.899 killing process with pid 67587 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67587' 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 67587 00:08:17.899 [2024-05-14 22:55:30.245360] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:17.899 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 67587 00:08:18.158 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.158 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.158 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.158 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.158 22:55:30 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.158 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.158 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.158 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.158 22:55:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:18.158 00:08:18.158 real 0m4.178s 00:08:18.158 user 0m12.162s 00:08:18.158 sys 0m0.920s 00:08:18.158 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.158 22:55:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:18.158 ************************************ 00:08:18.158 END TEST nvmf_abort 00:08:18.158 ************************************ 00:08:18.158 22:55:30 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:18.158 22:55:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:18.158 22:55:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.158 22:55:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.158 ************************************ 00:08:18.158 START TEST nvmf_ns_hotplug_stress 00:08:18.158 ************************************ 00:08:18.158 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:18.417 * Looking for test storage... 00:08:18.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.417 
22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:18.417 Cannot find device "nvmf_tgt_br" 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:18.417 Cannot find device "nvmf_tgt_br2" 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:18.417 Cannot find device "nvmf_tgt_br" 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:18.417 Cannot find device "nvmf_tgt_br2" 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:08:18.417 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:18.418 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:18.418 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:18.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:18.418 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:08:18.418 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:18.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:18.418 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:08:18.418 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:18.418 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:18.418 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:18.675 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:18.676 22:55:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:18.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:08:18.676 00:08:18.676 --- 10.0.0.2 ping statistics --- 00:08:18.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.676 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:18.676 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:18.676 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:08:18.676 00:08:18.676 --- 10.0.0.3 ping statistics --- 00:08:18.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.676 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:18.676 22:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:18.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:18.676 00:08:18.676 --- 10.0.0.1 ping statistics --- 00:08:18.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.676 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=67853 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 67853 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 67853 ']' 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:18.676 22:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.934 [2024-05-14 22:55:31.116450] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:08:18.934 [2024-05-14 22:55:31.116620] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.934 [2024-05-14 22:55:31.265117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.192 [2024-05-14 22:55:31.336923] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:19.192 [2024-05-14 22:55:31.337185] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.192 [2024-05-14 22:55:31.337343] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.192 [2024-05-14 22:55:31.337360] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.192 [2024-05-14 22:55:31.337369] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.192 [2024-05-14 22:55:31.337671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.192 [2024-05-14 22:55:31.338096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.192 [2024-05-14 22:55:31.338111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.758 22:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:19.758 22:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:08:19.758 22:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.758 22:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:19.758 22:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.758 22:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.758 22:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:19.758 22:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:20.324 [2024-05-14 22:55:32.442437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.324 22:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.582 22:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.840 [2024-05-14 22:55:33.026866] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:20.840 [2024-05-14 22:55:33.027590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.841 22:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.099 22:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:21.356 Malloc0 00:08:21.356 22:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:21.615 Delay0 00:08:21.615 22:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.183 22:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:22.183 NULL1 00:08:22.183 22:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:22.750 22:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=67990 00:08:22.750 22:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:22.750 22:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:22.750 22:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.722 Read completed with error (sct=0, sc=11) 00:08:23.722 22:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.980 22:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:23.980 22:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:24.238 true 00:08:24.238 22:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:24.238 22:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.173 22:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.431 22:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:25.431 22:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:25.689 true 00:08:25.689 22:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:25.689 22:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.947 22:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.206 22:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:26.206 22:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:26.464 true 00:08:26.464 22:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:26.464 22:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.723 22:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.983 22:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:26.983 22:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:27.241 true 00:08:27.241 22:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:27.241 22:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.176 22:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.434 22:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:28.435 22:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:28.693 true 00:08:28.693 22:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:28.693 22:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.951 22:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.210 22:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:29.210 22:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:29.468 true 00:08:29.468 22:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:29.468 22:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.034 22:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.292 22:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:30.292 22:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:30.550 true 00:08:30.550 22:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:30.550 22:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
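For readability, the sequence exercised by the trace up to this point can be condensed into the sketch below. It is a reconstruction from the commands that appear verbatim in the log, not the actual ns_hotplug_stress.sh source: the rpc/perf shorthand variables, the use of $! for PERF_PID, and the loop-while-perf-is-alive condition are assumptions.

    # Condensed sketch of the single-namespace hotplug phase traced above (reconstruction, not the real script).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py              # shorthand is an assumption; path as shown in the log
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Run a 30 s random-read load against the subsystem while namespaces are hot-plugged underneath it.
    $perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do                   # loop condition is an assumption
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # detach NSID 1 under active I/O
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # reattach it
        $rpc bdev_null_resize NULL1 $((++null_size))                    # grow NULL1 one step per iteration (1001, 1002, ...)
    done

The "Read completed with error (sct=0, sc=11)" lines interleaved in the trace appear to be the point of the test: reads issued by spdk_nvme_perf against a namespace that has just been detached complete with an error instead of hanging, and the perf run keeps going.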
00:08:30.865 22:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.144 22:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:31.144 22:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:31.402 true 00:08:31.402 22:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:31.402 22:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.338 22:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.597 22:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:32.597 22:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:32.855 true 00:08:32.855 22:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:32.855 22:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.114 22:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.372 22:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:33.372 22:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:33.630 true 00:08:33.630 22:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:33.630 22:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.888 22:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.146 22:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:34.146 22:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:34.403 true 00:08:34.403 22:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:34.403 22:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.662 22:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.920 22:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:34.920 22:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:35.178 true 00:08:35.436 22:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:35.436 22:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.370 22:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.370 22:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:36.370 22:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:36.628 true 00:08:36.628 22:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:36.628 22:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.897 22:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.167 22:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:37.167 22:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:37.425 true 00:08:37.425 22:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:37.425 22:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.684 22:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.943 22:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:37.943 22:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:38.202 true 00:08:38.202 22:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:38.202 22:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.139 22:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.397 22:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:39.397 22:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:39.656 true 00:08:39.656 22:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:39.656 22:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.915 22:55:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.174 22:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:40.174 22:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:40.432 true 00:08:40.433 22:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:40.433 22:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.999 22:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.256 22:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:41.256 22:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:41.521 true 00:08:41.521 22:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:41.521 22:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.105 22:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.363 22:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:42.363 22:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:42.621 true 00:08:42.621 22:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:42.621 22:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.187 22:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.445 22:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:43.445 22:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:43.704 true 00:08:43.704 22:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:43.704 22:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.271 22:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.529 22:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:44.529 22:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1021 00:08:44.786 true 00:08:44.786 22:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:44.787 22:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.045 22:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.303 22:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:45.303 22:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:45.562 true 00:08:45.562 22:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:45.562 22:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.821 22:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.079 22:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:46.079 22:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:46.337 true 00:08:46.337 22:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:46.337 22:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.291 22:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.550 22:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:47.550 22:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:47.808 true 00:08:47.808 22:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:47.808 22:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.179 22:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.437 22:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:49.437 22:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:50.003 true 00:08:50.003 22:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:50.003 22:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.569 22:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.827 22:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:50.827 22:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:51.085 true 00:08:51.085 22:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:51.085 22:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.046 22:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.046 22:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:52.046 22:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:52.611 true 00:08:52.611 22:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:52.611 22:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.177 Initializing NVMe Controllers 00:08:53.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:53.177 Controller IO queue size 128, less than required. 
00:08:53.177 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:53.177 Controller IO queue size 128, less than required. 00:08:53.177 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:53.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:53.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:53.177 Initialization complete. Launching workers. 00:08:53.177 ======================================================== 00:08:53.177 Latency(us) 00:08:53.177 Device Information : IOPS MiB/s Average min max 00:08:53.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 958.37 0.47 61233.40 3625.06 1115328.53 00:08:53.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8184.37 4.00 15639.12 3231.01 664862.67 00:08:53.177 ======================================================== 00:08:53.177 Total : 9142.73 4.46 20418.44 3231.01 1115328.53 00:08:53.177 00:08:53.177 22:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.435 22:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:53.435 22:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:53.693 true 00:08:53.693 22:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 67990 00:08:53.693 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (67990) - No such process 00:08:53.693 22:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 67990 00:08:53.693 22:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.951 22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:54.209 22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:54.209 22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:54.209 22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:54.209 22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:54.209 22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:54.467 null0 00:08:54.467 22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:54.467 22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:54.467 22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:54.726 null1 00:08:54.726 22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:54.726 22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:54.726 
22:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:54.984 null2 00:08:54.984 22:56:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:54.984 22:56:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:54.984 22:56:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:55.550 null3 00:08:55.550 22:56:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:55.550 22:56:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:55.550 22:56:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:55.808 null4 00:08:55.808 22:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:55.808 22:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:55.808 22:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:56.067 null5 00:08:56.067 22:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:56.067 22:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:56.067 22:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:56.639 null6 00:08:56.639 22:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:56.639 22:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:56.639 22:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:56.639 null7 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
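The eight per-worker backing devices created in the trace above (null0 through null7, with the 100 and 4096 arguments shown in the log) reduce to a short loop; a minimal sketch, assuming the seq-based form:

    # Sketch of the null-bdev setup for the eight add/remove workers (sizes as logged).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in $(seq 0 7); do
        $rpc bdev_null_create "null$i" 100 4096
    done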
00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.639 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
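The add_remove workers being launched above, and the way they are forked and later reaped, can be summarized as follows. This is a simplified reconstruction from the @14-@18 and @58-@66 trace lines, not the script itself; the function body and the background-job plumbing are condensed.

    # Reconstruction of the parallel hotplug phase traced above (simplified).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    add_remove() {                                   # one worker: repeatedly attach and detach its namespace
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &           # NSID i+1 backed by null<i>, all eight running concurrently
        pids+=($!)
    done
    wait "${pids[@]}"                                # reap all workers (the '@66 -- # wait 69013 ...' entry above)

The remainder of the trace in this section is the interleaved output of these eight concurrent workers, which is why the add_ns and remove_ns calls for different NSIDs appear out of order relative to one another.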
00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:56.898 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69013 69014 69015 69018 69020 69022 69024 69025 00:08:57.156 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:57.156 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:57.156 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:57.156 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.156 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:57.156 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:57.414 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:57.414 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:57.414 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.414 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.414 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:57.672 22:56:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.672 22:56:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:57.931 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.931 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.931 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:57.931 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.931 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.931 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:57.931 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:58.189 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.189 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:58.189 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:58.189 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:58.189 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:58.189 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:58.447 
22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.447 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:58.705 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.705 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.705 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:58.705 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.705 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.705 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:58.705 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.705 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.705 22:56:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:58.705 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.705 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.705 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:58.705 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.964 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:58.964 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:58.964 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:58.964 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:58.964 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:59.222 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:59.222 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:59.222 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.222 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.222 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:59.222 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.222 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.222 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:59.480 22:56:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.738 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.738 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.738 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:59.738 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:59.738 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:59.996 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:59.996 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:59.996 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:59.996 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.996 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.996 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:59.996 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.254 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:00.513 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.772 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:00.772 22:56:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:00.772 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:00.772 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.772 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.772 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:00.772 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:01.030 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.030 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.030 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:01.030 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.030 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.030 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:01.031 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.031 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.031 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:01.330 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:01.330 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.330 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.330 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:01.330 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.330 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.330 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.330 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:01.330 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.330 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.330 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:01.598 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:01.598 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:01.598 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:01.598 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.598 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:09:01.598 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:01.598 22:56:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:01.857 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.857 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.857 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:01.857 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:01.857 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:01.857 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.857 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.857 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:01.857 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.857 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.857 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:02.115 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.373 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:02.373 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:02.373 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:02.373 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:02.631 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.631 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.631 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:02.631 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:02.631 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:02.631 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.631 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.631 22:56:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.889 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:03.147 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:03.147 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.147 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.147 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:03.147 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.147 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.147 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:03.405 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:03.405 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:03.405 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:03.405 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.405 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.405 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:03.405 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:03.405 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.405 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.405 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:03.663 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:03.663 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:03.663 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.663 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.663 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:03.663 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.663 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.663 22:56:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:03.663 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.663 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.663 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:03.920 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.921 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.921 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:03.921 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:03.921 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.921 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.921 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:03.921 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:03.921 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.921 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.921 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.921 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:04.178 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:04.178 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:04.436 22:56:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:04.437 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.437 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.437 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:04.437 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.437 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.437 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:04.437 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.437 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.437 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.437 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:04.695 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:04.695 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.695 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.695 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:04.695 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.695 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.695 22:56:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:04.695 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.695 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.695 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:04.953 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:04.954 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.954 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.954 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:09:04.954 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:04.954 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.954 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.954 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.954 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:05.212 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:05.212 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:05.212 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:05.212 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.212 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.212 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.212 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.212 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:05.470 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:05.470 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:05.470 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.470 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.470 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:05.470 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.470 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.470 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.470 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.727 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.727 22:56:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.727 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.727 22:56:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.727 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.728 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:05.985 rmmod nvme_tcp 00:09:05.985 rmmod nvme_fabrics 00:09:05.985 rmmod nvme_keyring 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 67853 ']' 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 67853 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 67853 ']' 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 67853 00:09:05.985 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:09:05.986 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:05.986 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67853 00:09:05.986 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:05.986 killing process with pid 67853 00:09:05.986 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:05.986 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67853' 00:09:05.986 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 67853 00:09:05.986 [2024-05-14 22:56:18.267061] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:05.986 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 67853 00:09:06.244 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.244 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.244 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:09:06.244 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.244 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.244 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.244 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.244 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.244 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:06.244 00:09:06.244 real 0m47.999s 00:09:06.244 user 4m5.386s 00:09:06.244 sys 0m15.390s 00:09:06.244 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:06.244 22:56:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.244 ************************************ 00:09:06.244 END TEST nvmf_ns_hotplug_stress 00:09:06.244 ************************************ 00:09:06.244 22:56:18 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:06.244 22:56:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:06.244 22:56:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:06.244 22:56:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.244 ************************************ 00:09:06.244 START TEST nvmf_connect_stress 00:09:06.244 ************************************ 00:09:06.244 22:56:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:06.503 * Looking for test storage... 
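For reference, the nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns churn that just finished above boils down to a loop of the following shape. This is a minimal sketch only, assuming the rpc.py path, subsystem NQN and null0..null7 bdev names shown in the log; it is not the verbatim target/ns_hotplug_stress.sh, which interleaves the add and remove calls rather than running them in two tidy passes.

# Hot-plug stress pattern seen above: repeatedly attach and detach the eight
# null bdevs as namespaces 1..8 of the same subsystem, in shuffled order.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; i++ )); do
    for nsid in $(seq 1 8 | shuf); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))"
    done
    for nsid in $(seq 1 8 | shuf); do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
done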
00:09:06.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:06.503 Cannot find device "nvmf_tgt_br" 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:06.503 Cannot find device "nvmf_tgt_br2" 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:06.503 Cannot find device "nvmf_tgt_br" 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:06.503 Cannot find device "nvmf_tgt_br2" 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:09:06.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:06.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:06.503 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:06.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:09:06.763 00:09:06.763 --- 10.0.0.2 ping statistics --- 00:09:06.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.763 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:09:06.763 22:56:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:06.763 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:06.763 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:09:06.763 00:09:06.763 --- 10.0.0.3 ping statistics --- 00:09:06.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.763 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:06.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:06.763 00:09:06.763 --- 10.0.0.1 ping statistics --- 00:09:06.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.763 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=70379 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 70379 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 70379 ']' 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
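For reference, the nvmf_veth_init plumbing whose commands appear above reduces to the stand-alone recap below. The commands, interface names and addresses are taken from the log; the second target interface (nvmf_tgt_if2 / nvmf_tgt_br2, 10.0.0.3) is configured the same way and omitted here, and the nvmf_tgt launch at the end is the common.sh@480 invocation quoted above.

# One veth pair per side; the target end lives in the nvmf_tgt_ns_spdk
# namespace, and both bridge-side peers are enslaved to nvmf_br.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator to target, as verified by the pings above

# Target application started inside the namespace, as logged:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE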
00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:06.763 22:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.763 [2024-05-14 22:56:19.078691] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:09:06.763 [2024-05-14 22:56:19.079244] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.022 [2024-05-14 22:56:19.215794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:07.022 [2024-05-14 22:56:19.275626] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.022 [2024-05-14 22:56:19.275911] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.022 [2024-05-14 22:56:19.276051] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.022 [2024-05-14 22:56:19.276182] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.022 [2024-05-14 22:56:19.276218] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.022 [2024-05-14 22:56:19.276436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.022 [2024-05-14 22:56:19.276609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.022 [2024-05-14 22:56:19.276518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.957 [2024-05-14 22:56:20.090037] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.957 
22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.957 [2024-05-14 22:56:20.107266] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:07.957 [2024-05-14 22:56:20.107516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.957 NULL1 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=70431 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.957 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.215 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.216 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:08.216 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.216 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.216 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.473 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.473 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:08.473 22:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.473 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:09:08.473 22:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.040 22:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.040 22:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:09.040 22:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.040 22:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.040 22:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.298 22:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.298 22:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:09.298 22:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.298 22:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.298 22:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.555 22:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.555 22:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:09.555 22:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.555 22:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.555 22:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.812 22:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.812 22:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:09.812 22:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.812 22:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.812 22:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.069 22:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.069 22:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:10.069 22:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.069 22:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.069 22:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.633 22:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.633 22:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:10.633 22:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.633 22:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.633 22:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.891 22:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.891 22:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:10.891 22:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.891 22:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.891 22:56:23 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.149 22:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.149 22:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:11.149 22:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.149 22:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.149 22:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.407 22:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.407 22:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:11.407 22:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.407 22:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.407 22:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.972 22:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.972 22:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:11.972 22:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.972 22:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.972 22:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.230 22:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.230 22:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:12.230 22:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.230 22:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.230 22:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.488 22:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.488 22:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:12.488 22:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.488 22:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.488 22:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.747 22:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.747 22:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:12.747 22:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.747 22:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.747 22:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.005 22:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.005 22:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:13.006 22:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.006 22:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.006 22:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.572 22:56:25 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.572 22:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:13.572 22:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.572 22:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.572 22:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.830 22:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.830 22:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:13.830 22:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.830 22:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.830 22:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.089 22:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.089 22:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:14.089 22:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.089 22:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.089 22:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.347 22:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.347 22:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:14.347 22:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.347 22:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.347 22:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.605 22:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.605 22:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:14.605 22:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.605 22:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.605 22:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.172 22:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.172 22:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:15.172 22:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.172 22:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.172 22:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.431 22:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.431 22:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:15.431 22:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.431 22:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.431 22:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.689 22:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:09:15.689 22:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:15.689 22:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.689 22:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.689 22:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:15.947 22:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.947 22:56:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:15.947 22:56:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.947 22:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.947 22:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.206 22:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.206 22:56:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:16.206 22:56:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:16.206 22:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.206 22:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.814 22:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.814 22:56:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:16.814 22:56:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:16.814 22:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.814 22:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:17.072 22:56:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.072 22:56:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:17.072 22:56:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.072 22:56:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.072 22:56:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:17.330 22:56:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.330 22:56:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:17.330 22:56:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.331 22:56:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.331 22:56:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:17.589 22:56:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.589 22:56:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:17.589 22:56:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.589 22:56:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.589 22:56:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:17.847 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.847 22:56:30 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 70431 00:09:17.847 22:56:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.847 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.847 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.105 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:18.105 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.105 22:56:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70431 00:09:18.105 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (70431) - No such process 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 70431 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.363 rmmod nvme_tcp 00:09:18.363 rmmod nvme_fabrics 00:09:18.363 rmmod nvme_keyring 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 70379 ']' 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 70379 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 70379 ']' 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 70379 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70379 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70379' 00:09:18.363 killing process with pid 70379 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 70379 00:09:18.363 [2024-05-14 22:56:30.608380] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:09:18.363 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 70379 00:09:18.622 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.622 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.622 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.622 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.623 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.623 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.623 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.623 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.623 22:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:18.623 00:09:18.623 real 0m12.236s 00:09:18.623 user 0m40.794s 00:09:18.623 sys 0m3.336s 00:09:18.623 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:18.623 ************************************ 00:09:18.623 END TEST nvmf_connect_stress 00:09:18.623 ************************************ 00:09:18.623 22:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.623 22:56:30 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:18.623 22:56:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:18.623 22:56:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:18.623 22:56:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:18.623 ************************************ 00:09:18.623 START TEST nvmf_fused_ordering 00:09:18.623 ************************************ 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:18.623 * Looking for test storage... 
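For context, the nvmf_connect_stress run that just completed above follows a simple pattern: launch the connect_stress helper against the TCP listener, keep replaying RPCs to the target for as long as the helper process is still alive (the repeated kill -0 70431 checks), and finish once the helper exits on its own. A minimal sketch of that loop, assuming rpc_cmd wraps SPDK's scripts/rpc.py on the default RPC socket and substituting a simple query for the batched commands the script collected in rpc.txt:

# Sketch only: mirrors the loop traced in test/nvmf/target/connect_stress.sh above.
./test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
PERF_PID=$!
while kill -0 "$PERF_PID" 2>/dev/null; do
    # Keep the target's RPC server busy while the stressor connects and disconnects;
    # the real script replays the commands it wrote into rpc.txt instead of this query.
    scripts/rpc.py nvmf_get_subsystems >/dev/null
done
wait "$PERF_PID"    # a non-zero exit status here would fail the test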
00:09:18.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:18.623 Cannot find device "nvmf_tgt_br" 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:09:18.623 22:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:18.623 Cannot find device "nvmf_tgt_br2" 00:09:18.623 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:09:18.623 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:18.882 Cannot find device "nvmf_tgt_br" 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:18.882 Cannot find device "nvmf_tgt_br2" 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:09:18.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:18.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:18.882 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:19.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:19.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:09:19.141 00:09:19.141 --- 10.0.0.2 ping statistics --- 00:09:19.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.141 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:19.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:19.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:09:19.141 00:09:19.141 --- 10.0.0.3 ping statistics --- 00:09:19.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.141 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:19.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:19.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:09:19.141 00:09:19.141 --- 10.0.0.1 ping statistics --- 00:09:19.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.141 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=70754 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 70754 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 70754 ']' 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
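The nvmf_veth_init sequence traced above builds the virtual topology this test runs on: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target addresses (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and everything is tied together through the nvmf_br bridge, which is why all three pings succeed. A condensed sketch of the same setup, using only commands that appear in the trace (the second target interface and some link-up steps are omitted for brevity):

# Condensed from nvmf/common.sh (nvmf_veth_init) as traced above; error handling omitted.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # root namespace (initiator) can reach the target address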
00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:19.141 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:19.141 [2024-05-14 22:56:31.395035] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:09:19.142 [2024-05-14 22:56:31.395174] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.400 [2024-05-14 22:56:31.537728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.400 [2024-05-14 22:56:31.596987] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.400 [2024-05-14 22:56:31.597046] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.400 [2024-05-14 22:56:31.597058] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.400 [2024-05-14 22:56:31.597067] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.400 [2024-05-14 22:56:31.597074] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.400 [2024-05-14 22:56:31.597109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 [2024-05-14 22:56:31.717091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 [2024-05-14 
22:56:31.733035] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:19.400 [2024-05-14 22:56:31.733299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 NULL1 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.400 22:56:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:19.400 [2024-05-14 22:56:31.788473] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
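Target-side setup for nvmf_fused_ordering mirrors the connect_stress case: a TCP transport, one subsystem backed by the null bdev NULL1 (reported as a 1 GB namespace once the helper attaches), and a listener on 10.0.0.2:4420, after which the fused_ordering helper connects and prints the numbered checks that follow. A standalone sketch of the same sequence, assuming the rpc_cmd calls above map onto SPDK's scripts/rpc.py against the default RPC socket:

# scripts/rpc.py equivalents of the rpc_cmd calls traced above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# Initiator side: connect to the listener and run the ordering checks.
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'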
00:09:19.400 [2024-05-14 22:56:31.788531] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70789 ] 00:09:19.966 Attached to nqn.2016-06.io.spdk:cnode1 00:09:19.966 Namespace ID: 1 size: 1GB 00:09:19.966 fused_ordering(0) 00:09:19.966 fused_ordering(1) 00:09:19.966 fused_ordering(2) 00:09:19.966 fused_ordering(3) 00:09:19.966 fused_ordering(4) 00:09:19.966 fused_ordering(5) 00:09:19.966 fused_ordering(6) 00:09:19.966 fused_ordering(7) 00:09:19.966 fused_ordering(8) 00:09:19.966 fused_ordering(9) 00:09:19.966 fused_ordering(10) 00:09:19.966 fused_ordering(11) 00:09:19.966 fused_ordering(12) 00:09:19.966 fused_ordering(13) 00:09:19.966 fused_ordering(14) 00:09:19.966 fused_ordering(15) 00:09:19.966 fused_ordering(16) 00:09:19.966 fused_ordering(17) 00:09:19.966 fused_ordering(18) 00:09:19.966 fused_ordering(19) 00:09:19.966 fused_ordering(20) 00:09:19.966 fused_ordering(21) 00:09:19.966 fused_ordering(22) 00:09:19.966 fused_ordering(23) 00:09:19.966 fused_ordering(24) 00:09:19.966 fused_ordering(25) 00:09:19.966 fused_ordering(26) 00:09:19.966 fused_ordering(27) 00:09:19.966 fused_ordering(28) 00:09:19.966 fused_ordering(29) 00:09:19.966 fused_ordering(30) 00:09:19.966 fused_ordering(31) 00:09:19.966 fused_ordering(32) 00:09:19.966 fused_ordering(33) 00:09:19.966 fused_ordering(34) 00:09:19.966 fused_ordering(35) 00:09:19.966 fused_ordering(36) 00:09:19.966 fused_ordering(37) 00:09:19.966 fused_ordering(38) 00:09:19.966 fused_ordering(39) 00:09:19.966 fused_ordering(40) 00:09:19.966 fused_ordering(41) 00:09:19.966 fused_ordering(42) 00:09:19.966 fused_ordering(43) 00:09:19.966 fused_ordering(44) 00:09:19.966 fused_ordering(45) 00:09:19.966 fused_ordering(46) 00:09:19.966 fused_ordering(47) 00:09:19.966 fused_ordering(48) 00:09:19.966 fused_ordering(49) 00:09:19.966 fused_ordering(50) 00:09:19.966 fused_ordering(51) 00:09:19.966 fused_ordering(52) 00:09:19.966 fused_ordering(53) 00:09:19.966 fused_ordering(54) 00:09:19.966 fused_ordering(55) 00:09:19.966 fused_ordering(56) 00:09:19.966 fused_ordering(57) 00:09:19.966 fused_ordering(58) 00:09:19.966 fused_ordering(59) 00:09:19.966 fused_ordering(60) 00:09:19.966 fused_ordering(61) 00:09:19.966 fused_ordering(62) 00:09:19.966 fused_ordering(63) 00:09:19.966 fused_ordering(64) 00:09:19.966 fused_ordering(65) 00:09:19.966 fused_ordering(66) 00:09:19.966 fused_ordering(67) 00:09:19.966 fused_ordering(68) 00:09:19.966 fused_ordering(69) 00:09:19.966 fused_ordering(70) 00:09:19.966 fused_ordering(71) 00:09:19.966 fused_ordering(72) 00:09:19.966 fused_ordering(73) 00:09:19.966 fused_ordering(74) 00:09:19.966 fused_ordering(75) 00:09:19.966 fused_ordering(76) 00:09:19.966 fused_ordering(77) 00:09:19.966 fused_ordering(78) 00:09:19.966 fused_ordering(79) 00:09:19.966 fused_ordering(80) 00:09:19.966 fused_ordering(81) 00:09:19.966 fused_ordering(82) 00:09:19.966 fused_ordering(83) 00:09:19.966 fused_ordering(84) 00:09:19.966 fused_ordering(85) 00:09:19.966 fused_ordering(86) 00:09:19.966 fused_ordering(87) 00:09:19.966 fused_ordering(88) 00:09:19.966 fused_ordering(89) 00:09:19.966 fused_ordering(90) 00:09:19.966 fused_ordering(91) 00:09:19.966 fused_ordering(92) 00:09:19.966 fused_ordering(93) 00:09:19.966 fused_ordering(94) 00:09:19.966 fused_ordering(95) 00:09:19.966 fused_ordering(96) 00:09:19.966 fused_ordering(97) 00:09:19.966 fused_ordering(98) 
00:09:19.966 fused_ordering(99) 00:09:19.966 fused_ordering(100) [... fused_ordering(101) through fused_ordering(956) reported sequentially between 00:09:19.966 and 00:09:21.617 ...] 00:09:21.617 fused_ordering(957) 00:09:21.617 fused_ordering(958)
00:09:21.617 fused_ordering(959) 00:09:21.617 fused_ordering(960) 00:09:21.618 fused_ordering(961) 00:09:21.618 fused_ordering(962) 00:09:21.618 fused_ordering(963) 00:09:21.618 fused_ordering(964) 00:09:21.618 fused_ordering(965) 00:09:21.618 fused_ordering(966) 00:09:21.618 fused_ordering(967) 00:09:21.618 fused_ordering(968) 00:09:21.618 fused_ordering(969) 00:09:21.618 fused_ordering(970) 00:09:21.618 fused_ordering(971) 00:09:21.618 fused_ordering(972) 00:09:21.618 fused_ordering(973) 00:09:21.618 fused_ordering(974) 00:09:21.618 fused_ordering(975) 00:09:21.618 fused_ordering(976) 00:09:21.618 fused_ordering(977) 00:09:21.618 fused_ordering(978) 00:09:21.618 fused_ordering(979) 00:09:21.618 fused_ordering(980) 00:09:21.618 fused_ordering(981) 00:09:21.618 fused_ordering(982) 00:09:21.618 fused_ordering(983) 00:09:21.618 fused_ordering(984) 00:09:21.618 fused_ordering(985) 00:09:21.618 fused_ordering(986) 00:09:21.618 fused_ordering(987) 00:09:21.618 fused_ordering(988) 00:09:21.618 fused_ordering(989) 00:09:21.618 fused_ordering(990) 00:09:21.618 fused_ordering(991) 00:09:21.618 fused_ordering(992) 00:09:21.618 fused_ordering(993) 00:09:21.618 fused_ordering(994) 00:09:21.618 fused_ordering(995) 00:09:21.618 fused_ordering(996) 00:09:21.618 fused_ordering(997) 00:09:21.618 fused_ordering(998) 00:09:21.618 fused_ordering(999) 00:09:21.618 fused_ordering(1000) 00:09:21.618 fused_ordering(1001) 00:09:21.618 fused_ordering(1002) 00:09:21.618 fused_ordering(1003) 00:09:21.618 fused_ordering(1004) 00:09:21.618 fused_ordering(1005) 00:09:21.618 fused_ordering(1006) 00:09:21.618 fused_ordering(1007) 00:09:21.618 fused_ordering(1008) 00:09:21.618 fused_ordering(1009) 00:09:21.618 fused_ordering(1010) 00:09:21.618 fused_ordering(1011) 00:09:21.618 fused_ordering(1012) 00:09:21.618 fused_ordering(1013) 00:09:21.618 fused_ordering(1014) 00:09:21.618 fused_ordering(1015) 00:09:21.618 fused_ordering(1016) 00:09:21.618 fused_ordering(1017) 00:09:21.618 fused_ordering(1018) 00:09:21.618 fused_ordering(1019) 00:09:21.618 fused_ordering(1020) 00:09:21.618 fused_ordering(1021) 00:09:21.618 fused_ordering(1022) 00:09:21.618 fused_ordering(1023) 00:09:21.618 22:56:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:21.618 22:56:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:21.618 22:56:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:21.618 22:56:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:21.618 22:56:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:21.618 22:56:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:21.618 22:56:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:21.618 22:56:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:21.618 rmmod nvme_tcp 00:09:21.618 rmmod nvme_fabrics 00:09:21.876 rmmod nvme_keyring 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 70754 ']' 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 70754 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@946 -- # '[' -z 70754 ']' 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 70754 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70754 00:09:21.876 killing process with pid 70754 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70754' 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 70754 00:09:21.876 [2024-05-14 22:56:34.068076] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 70754 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.876 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.133 22:56:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:22.133 00:09:22.133 real 0m3.417s 00:09:22.133 user 0m4.102s 00:09:22.133 sys 0m1.316s 00:09:22.133 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.133 ************************************ 00:09:22.133 END TEST nvmf_fused_ordering 00:09:22.133 ************************************ 00:09:22.133 22:56:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:22.133 22:56:34 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:22.133 22:56:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:22.133 22:56:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.133 22:56:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.133 ************************************ 00:09:22.133 START TEST nvmf_delete_subsystem 00:09:22.133 ************************************ 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:22.133 * Looking for test storage... 
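The trace above is the tail of the fused_ordering run: nvmftestfini syncs, unloads the nvme-tcp and nvme-fabrics kernel modules (the rmmod lines), kills the target process (pid 70754) and removes the spdk target namespace before run_test moves on to delete_subsystem.sh. A rough manual equivalent of that cleanup is sketched below; the pid, namespace and interface names are taken from this particular log and the exact sequence inside nvmftestfini is an assumption, not the harness code.

#!/usr/bin/env bash
# Hypothetical manual cleanup approximating the nvmftestfini trace above.
# Pid 70754, namespace nvmf_tgt_ns_spdk and nvmf_init_if are assumed from this log.
set -x
sync
modprobe -v -r nvme-tcp || true        # mirrors the "rmmod nvme_tcp" output above
modprobe -v -r nvme-fabrics || true
kill 70754 2>/dev/null || true         # nvmf_tgt pid for the fused_ordering run
while kill -0 70754 2>/dev/null; do sleep 0.1; done
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
ip -4 addr flush dev nvmf_init_if 2>/dev/null || true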
00:09:22.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:22.133 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:22.134 Cannot find device "nvmf_tgt_br" 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.134 Cannot find device "nvmf_tgt_br2" 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:22.134 Cannot find device "nvmf_tgt_br" 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:22.134 Cannot find device "nvmf_tgt_br2" 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:09:22.134 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:22.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:22.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:22.390 22:56:34 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:22.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:22.390 00:09:22.390 --- 10.0.0.2 ping statistics --- 00:09:22.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.390 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:22.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:22.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:09:22.390 00:09:22.390 --- 10.0.0.3 ping statistics --- 00:09:22.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.390 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:22.390 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:22.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:22.648 00:09:22.648 --- 10.0.0.1 ping statistics --- 00:09:22.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.648 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=70969 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 70969 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 70969 ']' 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:22.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
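The "Waiting for process to start up..." message comes from waitforlisten polling the freshly started target. Outside the harness, this step amounts to launching nvmf_tgt inside the target namespace (command line as traced above) and polling its RPC socket until it answers; the retry loop below is an assumption for illustration, not the harness's waitforlisten implementation.

# Launch nvmf_tgt in the target namespace, exactly as traced in this log.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
tgt_pid=$!
# Poll the RPC socket until the application is ready to serve requests.
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done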
00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:22.648 22:56:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:22.648 [2024-05-14 22:56:34.853976] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:09:22.648 [2024-05-14 22:56:34.854248] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.648 [2024-05-14 22:56:34.990789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:22.905 [2024-05-14 22:56:35.062712] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.905 [2024-05-14 22:56:35.062773] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.905 [2024-05-14 22:56:35.062788] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.905 [2024-05-14 22:56:35.062797] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.905 [2024-05-14 22:56:35.062806] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.905 [2024-05-14 22:56:35.062963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.905 [2024-05-14 22:56:35.063061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.469 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:23.469 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:09:23.469 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:23.469 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.469 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.726 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.726 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.726 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.726 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.727 [2024-05-14 22:56:35.898662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.727 [2024-05-14 22:56:35.914578] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:23.727 [2024-05-14 22:56:35.914825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.727 NULL1 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.727 Delay0 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71020 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:23.727 22:56:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:23.727 [2024-05-14 22:56:36.109278] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
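Everything the delete_subsystem test sets up is visible in the rpc_cmd traces above: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that queued I/O stays outstanding long enough to be caught by the delete. A minimal sketch of the same sequence issued directly through scripts/rpc.py follows; the socket path and binary locations are assumed from this log, and rpc_cmd in the harness is effectively a wrapper over these same RPC calls.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                 # backing null bdev: 1000 MB, 512-byte blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s artificial latency per I/O (values in microseconds)
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Drive queued I/O at the slow namespace, then delete the subsystem while it is in flight.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

With roughly one second of added latency per I/O and a queue depth of 128, the delete lands while many commands are still queued, which is why the trace that follows shows large numbers of reads and writes completing with error status while the subsystem is torn down.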
00:09:25.624 22:56:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:25.624 22:56:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:25.624 22:56:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:25.883 [repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6"]
00:09:25.883 [2024-05-14 22:56:38.149708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2220 is same with the state(5) to be set
00:09:25.883 [repeated Read/Write error completions continue]
00:09:25.883 [2024-05-14 22:56:38.150941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb558000c00 is same with the state(5) to be set
00:09:25.883 [repeated Read/Write error completions continue]
00:09:26.819 [2024-05-14 22:56:39.123067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df0100 is same with the state(5) to be set
00:09:26.819 [repeated Read/Write error completions continue]
00:09:26.819 [2024-05-14 22:56:39.151361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb55800bfe0 is same with the state(5) to be set
00:09:26.819 [repeated Read/Write error completions continue]
00:09:26.819 [2024-05-14 22:56:39.151624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb55800c780 is same with the state(5) to be set
00:09:26.819 [repeated Read/Write error completions continue]
00:09:26.819 [2024-05-14 22:56:39.152480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2040 is same with the state(5) to be set
00:09:26.819 [repeated Read/Write error completions continue]
00:09:26.819 Read completed with error (sct=0, sc=8) 00:09:26.819 Write completed with error (sct=0, sc=8) 00:09:26.819 Read completed with error (sct=0, sc=8) 00:09:26.819 Read completed with error (sct=0, sc=8) 00:09:26.819 Write completed with error (sct=0, sc=8) 00:09:26.819 Read completed with error (sct=0, sc=8) 00:09:26.819 Read completed with error (sct=0, sc=8) 00:09:26.819 Write completed with error (sct=0, sc=8) 00:09:26.819 Read completed with error (sct=0, sc=8) 00:09:26.819 Read completed with error (sct=0, sc=8) 00:09:26.819 Read completed with error (sct=0, sc=8) 00:09:26.819 Read completed with error (sct=0, sc=8) 00:09:26.819 Write completed with error (sct=0, sc=8) 00:09:26.819 Write completed with error (sct=0, sc=8) 00:09:26.819 Read completed with error (sct=0, sc=8) 00:09:26.819 Read completed with error (sct=0, sc=8) 00:09:26.819 [2024-05-14 22:56:39.153117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df0ce0 is same with the state(5) to be set 00:09:26.819 Initializing NVMe Controllers 00:09:26.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:26.819 Controller IO queue size 128, less than required. 00:09:26.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:26.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:26.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:26.819 Initialization complete. Launching workers. 00:09:26.819 ======================================================== 00:09:26.819 Latency(us) 00:09:26.819 Device Information : IOPS MiB/s Average min max 00:09:26.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.01 0.08 898517.54 534.15 1017144.77 00:09:26.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.50 0.08 913807.13 381.31 2002461.55 00:09:26.819 ======================================================== 00:09:26.819 Total : 339.51 0.17 906195.82 381.31 2002461.55 00:09:26.819 00:09:26.819 [2024-05-14 22:56:39.153499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df0100 (9): Bad file descriptor 00:09:26.819 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:26.819 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.819 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:26.819 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71020 00:09:26.820 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71020 00:09:27.396 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71020) - No such process 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71020 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71020 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@636 -- # local arg=wait 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71020 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.396 [2024-05-14 22:56:39.678195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71071 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71071 00:09:27.396 22:56:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:27.655 [2024-05-14 22:56:39.848725] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
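
Stripped of the xtrace noise, the second half of this test re-creates the subsystem it just deleted and drives a short timed workload against it. The traced commands correspond roughly to the following sketch (a minimal sketch only, assuming a running nvmf_tgt reachable through the stock scripts/rpc.py and an existing Delay0 bdev, as in this run; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py):

    # re-create nqn.2016-06.io.spdk:cnode1, expose it on TCP 10.0.0.2:4420 and attach Delay0 as a namespace
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # 3-second randrw workload from cores 2-3 (-c 0xC), queue depth 128, 512-byte I/O, 70% reads
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # poll the perf process every 0.5 s until it exits, like the delay loop traced below
    # (the harness additionally bounds the wait via its (( delay++ > 20 )) check)
    while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done
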
00:09:27.913 22:56:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:27.913 22:56:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71071 00:09:27.913 22:56:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:28.480 22:56:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:28.480 22:56:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71071 00:09:28.480 22:56:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:29.046 22:56:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:29.046 22:56:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71071 00:09:29.046 22:56:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:29.613 22:56:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:29.613 22:56:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71071 00:09:29.613 22:56:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:29.871 22:56:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:29.871 22:56:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71071 00:09:29.871 22:56:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.438 22:56:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.438 22:56:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71071 00:09:30.438 22:56:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.696 Initializing NVMe Controllers 00:09:30.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:30.696 Controller IO queue size 128, less than required. 00:09:30.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:30.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:30.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:30.696 Initialization complete. Launching workers. 
00:09:30.696 ======================================================== 00:09:30.696 Latency(us) 00:09:30.696 Device Information : IOPS MiB/s Average min max 00:09:30.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005569.87 1000196.69 1011909.40 00:09:30.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003706.03 1000145.89 1041898.74 00:09:30.696 ======================================================== 00:09:30.696 Total : 256.00 0.12 1004637.95 1000145.89 1041898.74 00:09:30.696 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71071 00:09:30.955 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71071) - No such process 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71071 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.955 rmmod nvme_tcp 00:09:30.955 rmmod nvme_fabrics 00:09:30.955 rmmod nvme_keyring 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 70969 ']' 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 70969 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 70969 ']' 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 70969 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70969 00:09:30.955 killing process with pid 70969 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70969' 00:09:30.955 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 70969 00:09:30.955 [2024-05-14 22:56:43.328225] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:30.956 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 70969 00:09:31.213 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.213 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.213 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.213 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.213 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.213 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.213 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.213 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.213 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:31.213 00:09:31.213 real 0m9.197s 00:09:31.213 user 0m28.628s 00:09:31.213 sys 0m1.483s 00:09:31.213 ************************************ 00:09:31.213 END TEST nvmf_delete_subsystem 00:09:31.213 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:31.213 22:56:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.214 ************************************ 00:09:31.214 22:56:43 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:31.214 22:56:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:31.214 22:56:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:31.214 22:56:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:31.214 ************************************ 00:09:31.214 START TEST nvmf_ns_masking 00:09:31.214 ************************************ 00:09:31.214 22:56:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:31.472 * Looking for test storage... 
00:09:31.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:31.472 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=edbf1ec1-b6c8-4b31-a794-e4ecd98c0cf9 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:31.473 22:56:43 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:31.473 Cannot find device "nvmf_tgt_br" 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.473 Cannot find device "nvmf_tgt_br2" 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:31.473 Cannot find device "nvmf_tgt_br" 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:09:31.473 Cannot find device "nvmf_tgt_br2" 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.473 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:31.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:09:31.732 00:09:31.732 --- 10.0.0.2 ping statistics --- 00:09:31.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.732 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:31.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:09:31.732 00:09:31.732 --- 10.0.0.3 ping statistics --- 00:09:31.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.732 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:31.732 00:09:31.732 --- 10.0.0.1 ping statistics --- 00:09:31.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.732 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.732 22:56:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.732 22:56:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:31.732 22:56:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.732 22:56:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:31.732 22:56:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:31.732 22:56:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=71301 00:09:31.732 22:56:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.732 22:56:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 71301 00:09:31.733 22:56:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 71301 ']' 00:09:31.733 22:56:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.733 22:56:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:31.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
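
For reference, the virtual test network that nvmf_veth_init just built (and that the three pings verify) can be reproduced by hand. Condensed from the trace above, it is roughly the following (a sketch assuming root privileges and no pre-existing nvmf_* links or namespace; names and addresses match this run):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator side keeps 10.0.0.1; the target namespace owns 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the three peer ends together and allow NVMe/TCP traffic through
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity-check connectivity in both directions
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target itself is then started inside that namespace, exactly as traced below: ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF.
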
00:09:31.733 22:56:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.733 22:56:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:31.733 22:56:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:31.733 [2024-05-14 22:56:44.067689] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:09:31.733 [2024-05-14 22:56:44.067790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.991 [2024-05-14 22:56:44.204994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.991 [2024-05-14 22:56:44.265908] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.991 [2024-05-14 22:56:44.265964] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.991 [2024-05-14 22:56:44.265976] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.991 [2024-05-14 22:56:44.265984] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.991 [2024-05-14 22:56:44.265993] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.991 [2024-05-14 22:56:44.266096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.991 [2024-05-14 22:56:44.266237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.991 [2024-05-14 22:56:44.266689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.992 [2024-05-14 22:56:44.266722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.926 22:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:32.926 22:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:09:32.926 22:56:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.926 22:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.926 22:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:32.926 22:56:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.926 22:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.926 [2024-05-14 22:56:45.300416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.185 22:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:33.185 22:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:33.185 22:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:33.443 Malloc1 00:09:33.443 22:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:33.701 Malloc2 00:09:33.701 22:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:33.959 22:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:34.217 22:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.476 [2024-05-14 22:56:46.689888] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:34.476 [2024-05-14 22:56:46.690267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.476 22:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:09:34.476 22:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I edbf1ec1-b6c8-4b31-a794-e4ecd98c0cf9 -a 10.0.0.2 -s 4420 -i 4 00:09:34.476 22:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.476 22:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:34.476 22:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.476 22:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:34.476 22:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:37.005 [ 0]:0x1 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=95eadb5bc5614633b097851846c0cd69 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 95eadb5bc5614633b097851846c0cd69 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.005 22:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:37.005 [ 0]:0x1 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=95eadb5bc5614633b097851846c0cd69 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 95eadb5bc5614633b097851846c0cd69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:37.005 [ 1]:0x2 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3e2ae84becc04d81863edc83f0ed3fa1 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3e2ae84becc04d81863edc83f0ed3fa1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.005 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.264 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:37.523 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:09:37.523 22:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I edbf1ec1-b6c8-4b31-a794-e4ecd98c0cf9 -a 10.0.0.2 -s 4420 -i 4 00:09:37.781 22:56:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:37.781 22:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:37.781 22:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.781 22:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:09:37.781 22:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:09:37.781 22:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:39.679 22:56:52 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:39.679 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:39.679 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.679 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:39.679 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.679 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:39.679 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:39.679 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:39.937 [ 0]:0x2 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 
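
The ns_is_visible checks traced here are entirely host-side: the function lists the controller's active namespaces and treats an all-zero NGUID as "not visible". Pulled out of the xtrace, the masking round-trip looks roughly like this (a sketch reusing the NQNs, host UUID and the /dev/nvme0 controller name from this run):

    # attach namespace 1 as private, i.e. not auto-visible to any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

    # connect as host1; the masked namespace must not show up
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I edbf1ec1-b6c8-4b31-a794-e4ecd98c0cf9 -a 10.0.0.2 -s 4420 -i 4

    # visibility probe: a listed NSID with a non-zero NGUID counts as visible
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros while masked

    # grant host1 access to NSID 1, re-check, then revoke it again
    scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # now reports the real NGUID
    scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
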
00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3e2ae84becc04d81863edc83f0ed3fa1 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3e2ae84becc04d81863edc83f0ed3fa1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:39.937 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:40.196 [ 0]:0x1 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=95eadb5bc5614633b097851846c0cd69 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 95eadb5bc5614633b097851846c0cd69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:40.196 [ 1]:0x2 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3e2ae84becc04d81863edc83f0ed3fa1 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3e2ae84becc04d81863edc83f0ed3fa1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.196 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:40.762 [ 0]:0x2 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3e2ae84becc04d81863edc83f0ed3fa1 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3e2ae84becc04d81863edc83f0ed3fa1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:09:40.762 22:56:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.762 22:56:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:41.021 22:56:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:09:41.021 22:56:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I edbf1ec1-b6c8-4b31-a794-e4ecd98c0cf9 -a 10.0.0.2 -s 4420 -i 4 00:09:41.021 22:56:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:41.021 22:56:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:41.021 22:56:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.021 22:56:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:09:41.021 22:56:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:09:41.021 22:56:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 
00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:43.552 [ 0]:0x1 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=95eadb5bc5614633b097851846c0cd69 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 95eadb5bc5614633b097851846c0cd69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:43.552 [ 1]:0x2 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3e2ae84becc04d81863edc83f0ed3fa1 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3e2ae84becc04d81863edc83f0ed3fa1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:43.552 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:43.811 22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:43.811 
22:56:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:43.811 [ 0]:0x2 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3e2ae84becc04d81863edc83f0ed3fa1 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3e2ae84becc04d81863edc83f0ed3fa1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:43.811 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:44.070 [2024-05-14 22:56:56.295012] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:44.070 2024/05/14 22:56:56 
error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:09:44.070 request: 00:09:44.070 { 00:09:44.070 "method": "nvmf_ns_remove_host", 00:09:44.070 "params": { 00:09:44.070 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.070 "nsid": 2, 00:09:44.070 "host": "nqn.2016-06.io.spdk:host1" 00:09:44.070 } 00:09:44.070 } 00:09:44.070 Got JSON-RPC error response 00:09:44.070 GoRPCClient: error on JSON-RPC call 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:44.070 [ 0]:0x2 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=3e2ae84becc04d81863edc83f0ed3fa1 
00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 3e2ae84becc04d81863edc83f0ed3fa1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:09:44.070 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.328 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.586 rmmod nvme_tcp 00:09:44.586 rmmod nvme_fabrics 00:09:44.586 rmmod nvme_keyring 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 71301 ']' 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 71301 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 71301 ']' 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 71301 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71301 00:09:44.586 killing process with pid 71301 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71301' 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 71301 00:09:44.586 [2024-05-14 22:56:56.899266] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:44.586 22:56:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 71301 00:09:44.845 22:56:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.845 22:56:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.845 22:56:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.845 22:56:57 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.845 22:56:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.845 22:56:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.845 22:56:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.845 22:56:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.845 22:56:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:44.845 ************************************ 00:09:44.845 END TEST nvmf_ns_masking 00:09:44.845 ************************************ 00:09:44.845 00:09:44.845 real 0m13.581s 00:09:44.845 user 0m54.617s 00:09:44.845 sys 0m2.325s 00:09:44.845 22:56:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:44.845 22:56:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:44.845 22:56:57 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:09:44.845 22:56:57 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:09:44.845 22:56:57 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:44.845 22:56:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:44.845 22:56:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:44.845 22:56:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.845 ************************************ 00:09:44.845 START TEST nvmf_host_management 00:09:44.845 ************************************ 00:09:44.845 22:56:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:45.103 * Looking for test storage... 
00:09:45.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.103 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:45.104 Cannot find device "nvmf_tgt_br" 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.104 Cannot find device "nvmf_tgt_br2" 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:45.104 Cannot find device "nvmf_tgt_br" 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:45.104 Cannot find device "nvmf_tgt_br2" 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.104 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.363 22:56:57 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:45.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:09:45.363 00:09:45.363 --- 10.0.0.2 ping statistics --- 00:09:45.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.363 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:45.363 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:45.363 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:09:45.363 00:09:45.363 --- 10.0.0.3 ping statistics --- 00:09:45.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.363 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:09:45.363 00:09:45.363 --- 10.0.0.1 ping statistics --- 00:09:45.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.363 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=71859 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 71859 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 71859 ']' 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:45.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:45.363 22:56:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.363 [2024-05-14 22:56:57.741801] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:09:45.363 [2024-05-14 22:56:57.741911] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.622 [2024-05-14 22:56:57.881532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.622 [2024-05-14 22:56:57.944825] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.622 [2024-05-14 22:56:57.945063] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.622 [2024-05-14 22:56:57.945213] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.622 [2024-05-14 22:56:57.945451] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.622 [2024-05-14 22:56:57.945589] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:45.622 [2024-05-14 22:56:57.945817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.622 [2024-05-14 22:56:57.946031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.622 [2024-05-14 22:56:57.946089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:45.622 [2024-05-14 22:56:57.946091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.880 [2024-05-14 22:56:58.077184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:45.880 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.881 Malloc0 00:09:45.881 [2024-05-14 22:56:58.142626] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:45.881 [2024-05-14 22:56:58.143462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=71917 00:09:45.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 71917 /var/tmp/bdevperf.sock 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 71917 ']' 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:45.881 { 00:09:45.881 "params": { 00:09:45.881 "name": "Nvme$subsystem", 00:09:45.881 "trtype": "$TEST_TRANSPORT", 00:09:45.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.881 "adrfam": "ipv4", 00:09:45.881 "trsvcid": "$NVMF_PORT", 00:09:45.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.881 "hdgst": ${hdgst:-false}, 00:09:45.881 "ddgst": ${ddgst:-false} 00:09:45.881 }, 00:09:45.881 "method": "bdev_nvme_attach_controller" 00:09:45.881 } 00:09:45.881 EOF 00:09:45.881 )") 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:45.881 22:56:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:45.881 "params": { 00:09:45.881 "name": "Nvme0", 00:09:45.881 "trtype": "tcp", 00:09:45.881 "traddr": "10.0.0.2", 00:09:45.881 "adrfam": "ipv4", 00:09:45.881 "trsvcid": "4420", 00:09:45.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:45.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:45.881 "hdgst": false, 00:09:45.881 "ddgst": false 00:09:45.881 }, 00:09:45.881 "method": "bdev_nvme_attach_controller" 00:09:45.881 }' 00:09:45.881 [2024-05-14 22:56:58.256240] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:09:45.881 [2024-05-14 22:56:58.256582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71917 ] 00:09:46.139 [2024-05-14 22:56:58.397327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.139 [2024-05-14 22:56:58.461737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.397 Running I/O for 10 seconds... 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:46.397 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:46.655 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:46.655 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:46.655 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:46.655 22:56:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:46.655 22:56:58 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.655 22:56:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.655 22:56:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.655 22:56:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:09:46.655 22:56:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:09:46.655 22:56:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:46.655 22:56:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:46.655 22:56:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:46.655 22:56:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:46.655 22:56:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.655 22:56:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.914 [2024-05-14 22:56:59.043986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61d10 is same with the state(5) to be set 00:09:46.914 [2024-05-14 22:56:59.044269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.914 [2024-05-14 22:56:59.044300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.914 [2024-05-14 22:56:59.044324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.914 [2024-05-14 22:56:59.044335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.914 [2024-05-14 22:56:59.044347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.914 [2024-05-14 22:56:59.044356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.914 [2024-05-14 22:56:59.044368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.914 [2024-05-14 22:56:59.044378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.914 [2024-05-14 22:56:59.044389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.914 [2024-05-14 22:56:59.044399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.914 [2024-05-14 22:56:59.044411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.914 [2024-05-14 22:56:59.044420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.914 [2024-05-14 22:56:59.044432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.914 [2024-05-14 22:56:59.044441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.914 [2024-05-14 22:56:59.044453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.914 [2024-05-14 22:56:59.044462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.044986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.044995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:46.915 [2024-05-14 22:56:59.045342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.915 [2024-05-14 22:56:59.045375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.915 [2024-05-14 22:56:59.045386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
task offset: 72832 on job bdev=Nvme0n1 fails 00:09:46.916 00:09:46.916 Latency(us) 00:09:46.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.916 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:46.916 Job: Nvme0n1 ended in about 0.44 seconds with error 00:09:46.916 Verification LBA range: start 0x0 length 0x400 00:09:46.916 Nvme0n1 : 0.44 1160.42 72.53 145.05 0.00 47427.93 2398.02 47424.23 00:09:46.916 =================================================================================================================== 00:09:46.916 Total : 1160.42 72.53 145.05 0.00 47427.93 2398.02 47424.23 00:09:46.916
00:09:46.916 [2024-05-14 22:56:59.045420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:46.916 [2024-05-14 22:56:59.045702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.045712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181b4f0 is same with the state(5) to be set 00:09:46.916 [2024-05-14 22:56:59.045777] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x181b4f0 was disconnected and freed. reset controller. 00:09:46.916 [2024-05-14 22:56:59.047000] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:46.916 [2024-05-14 22:56:59.049386] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:46.916 [2024-05-14 22:56:59.049425] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1819740 (9): Bad file descriptor 00:09:46.916 22:56:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.916 22:56:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:46.916 22:56:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.916 [2024-05-14 22:56:59.053369] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:09:46.916 22:56:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.916 [2024-05-14 22:56:59.053494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:09:46.916 [2024-05-14 22:56:59.053522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.916 [2024-05-14 22:56:59.053542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:09:46.916 [2024-05-14 22:56:59.053553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:09:46.916 [2024-05-14 22:56:59.053562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:09:46.916 [2024-05-14 22:56:59.053571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1819740 00:09:46.916 [2024-05-14 22:56:59.053610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1819740 (9): Bad file descriptor 00:09:46.916 [2024-05-14 22:56:59.053629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:09:46.916 [2024-05-14 22:56:59.053640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:09:46.916 [2024-05-14 22:56:59.053650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:09:46.916 [2024-05-14 22:56:59.053668] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
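The block above is the core of the host-management check: while bdevperf I/O is in flight, the test revokes the only allowed host from the subsystem, the queued commands complete as ABORTED - SQ DELETION, the reconnect is rejected with "does not allow host", and the host is then re-admitted so the follow-up run below can attach again. As a rough consolidation of the two RPC calls driving this step (a sketch, not the verbatim host_management.sh; the method names, NQNs, and the rpc.py path are taken from this log):

# Revoke host0 from cnode0 while I/O is running; in-flight commands are aborted
# and the controller reset/reconnect fails, as shown in the trace above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admit the host so a later bdevperf run can connect and complete its I/O.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0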
00:09:46.916 22:56:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.916 22:56:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 71917 00:09:47.851 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71917) - No such process 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:47.851 { 00:09:47.851 "params": { 00:09:47.851 "name": "Nvme$subsystem", 00:09:47.851 "trtype": "$TEST_TRANSPORT", 00:09:47.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.851 "adrfam": "ipv4", 00:09:47.851 "trsvcid": "$NVMF_PORT", 00:09:47.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.851 "hdgst": ${hdgst:-false}, 00:09:47.851 "ddgst": ${ddgst:-false} 00:09:47.851 }, 00:09:47.851 "method": "bdev_nvme_attach_controller" 00:09:47.851 } 00:09:47.851 EOF 00:09:47.851 )") 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:47.851 22:57:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:47.851 "params": { 00:09:47.851 "name": "Nvme0", 00:09:47.851 "trtype": "tcp", 00:09:47.851 "traddr": "10.0.0.2", 00:09:47.851 "adrfam": "ipv4", 00:09:47.851 "trsvcid": "4420", 00:09:47.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:47.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:47.851 "hdgst": false, 00:09:47.851 "ddgst": false 00:09:47.851 }, 00:09:47.851 "method": "bdev_nvme_attach_controller" 00:09:47.851 }' 00:09:47.851 [2024-05-14 22:57:00.112871] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:09:47.851 [2024-05-14 22:57:00.112960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71963 ] 00:09:48.109 [2024-05-14 22:57:00.245237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.109 [2024-05-14 22:57:00.319069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.109 Running I/O for 1 seconds... 
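Before the results that follow: the bdevperf invocation above reads its target configuration from /dev/fd/62, where gen_nvmf_target_json writes the bdev_nvme_attach_controller entry printed in the trace. Below is a minimal standalone sketch of the same setup; the attach parameters and command-line flags are copied from the trace, while the outer "subsystems"/"bdev" envelope and the /tmp file path are illustrative assumptions rather than something shown verbatim in this excerpt.

cat > /tmp/bdevperf_nvme0.json <<'EOF' # illustrative file; the test pipes equivalent JSON via /dev/fd/62
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Same flags as the traced run: queue depth 64, 64 KiB I/Os, verify workload, 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1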
00:09:49.485 00:09:49.485 Latency(us) 00:09:49.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.485 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:49.485 Verification LBA range: start 0x0 length 0x400 00:09:49.485 Nvme0n1 : 1.02 1438.83 89.93 0.00 0.00 43469.27 9651.67 43372.92 00:09:49.485 =================================================================================================================== 00:09:49.485 Total : 1438.83 89.93 0.00 0.00 43469.27 9651.67 43372.92 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.485 rmmod nvme_tcp 00:09:49.485 rmmod nvme_fabrics 00:09:49.485 rmmod nvme_keyring 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 71859 ']' 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 71859 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 71859 ']' 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 71859 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71859 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:49.485 killing process with pid 71859 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71859' 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 71859 00:09:49.485 [2024-05-14 22:57:01.830821] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal 
in v24.09 hit 1 times 00:09:49.485 22:57:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 71859 00:09:49.744 [2024-05-14 22:57:02.020246] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:49.744 ************************************ 00:09:49.744 END TEST nvmf_host_management 00:09:49.744 ************************************ 00:09:49.744 00:09:49.744 real 0m4.862s 00:09:49.744 user 0m18.405s 00:09:49.744 sys 0m1.204s 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:49.744 22:57:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.744 22:57:02 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:49.744 22:57:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:49.744 22:57:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:49.744 22:57:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.744 ************************************ 00:09:49.744 START TEST nvmf_lvol 00:09:49.744 ************************************ 00:09:49.744 22:57:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:50.004 * Looking for test storage... 
00:09:50.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.004 22:57:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:50.005 22:57:02 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:50.005 Cannot find device "nvmf_tgt_br" 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.005 Cannot find device "nvmf_tgt_br2" 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:50.005 Cannot find device "nvmf_tgt_br" 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:50.005 Cannot find device "nvmf_tgt_br2" 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:50.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:50.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:50.005 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:50.264 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:50.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:09:50.265 00:09:50.265 --- 10.0.0.2 ping statistics --- 00:09:50.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.265 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:50.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:50.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:09:50.265 00:09:50.265 --- 10.0.0.3 ping statistics --- 00:09:50.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.265 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:50.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:50.265 00:09:50.265 --- 10.0.0.1 ping statistics --- 00:09:50.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.265 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:50.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=72166 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 72166 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 72166 ']' 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:50.265 22:57:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:50.265 [2024-05-14 22:57:02.614391] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:09:50.265 [2024-05-14 22:57:02.614495] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.524 [2024-05-14 22:57:02.755975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:50.524 [2024-05-14 22:57:02.843623] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.524 [2024-05-14 22:57:02.844000] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:50.524 [2024-05-14 22:57:02.844198] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.524 [2024-05-14 22:57:02.844396] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.524 [2024-05-14 22:57:02.844596] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.524 [2024-05-14 22:57:02.844943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.524 [2024-05-14 22:57:02.845016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.524 [2024-05-14 22:57:02.845030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.457 22:57:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:51.457 22:57:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:09:51.457 22:57:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.457 22:57:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.457 22:57:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:51.457 22:57:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.457 22:57:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:51.713 [2024-05-14 22:57:04.003053] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.713 22:57:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.971 22:57:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:51.971 22:57:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.535 22:57:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:52.535 22:57:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:52.794 22:57:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:53.067 22:57:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ba50aea3-214b-4367-a307-a11a221ccd38 00:09:53.067 22:57:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ba50aea3-214b-4367-a307-a11a221ccd38 lvol 20 00:09:53.360 22:57:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=accf6ef8-52dc-401b-a7c0-0d3b70eb3d8e 00:09:53.360 22:57:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:53.619 22:57:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 accf6ef8-52dc-401b-a7c0-0d3b70eb3d8e 00:09:53.878 22:57:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:54.136 [2024-05-14 22:57:06.322872] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in 
favor of trtype to be removed in v24.09 00:09:54.136 [2024-05-14 22:57:06.323515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.136 22:57:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:54.395 22:57:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=72324 00:09:54.396 22:57:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:54.396 22:57:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:55.329 22:57:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot accf6ef8-52dc-401b-a7c0-0d3b70eb3d8e MY_SNAPSHOT 00:09:55.895 22:57:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=63b5c7ba-70f5-426c-a026-a48536e14bc3 00:09:55.895 22:57:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize accf6ef8-52dc-401b-a7c0-0d3b70eb3d8e 30 00:09:56.153 22:57:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 63b5c7ba-70f5-426c-a026-a48536e14bc3 MY_CLONE 00:09:56.412 22:57:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b92592fd-88a8-427f-86fb-d13200bce70a 00:09:56.412 22:57:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b92592fd-88a8-427f-86fb-d13200bce70a 00:09:56.995 22:57:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 72324 00:10:05.105 Initializing NVMe Controllers 00:10:05.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:05.105 Controller IO queue size 128, less than required. 00:10:05.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:05.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:05.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:05.105 Initialization complete. Launching workers. 
00:10:05.105 ======================================================== 00:10:05.105 Latency(us) 00:10:05.105 Device Information : IOPS MiB/s Average min max 00:10:05.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10402.50 40.63 12308.51 1579.25 73232.39 00:10:05.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10380.70 40.55 12336.04 3431.40 70681.27 00:10:05.105 ======================================================== 00:10:05.105 Total : 20783.20 81.18 12322.26 1579.25 73232.39 00:10:05.105 00:10:05.105 22:57:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:05.105 22:57:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete accf6ef8-52dc-401b-a7c0-0d3b70eb3d8e 00:10:05.105 22:57:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba50aea3-214b-4367-a307-a11a221ccd38 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.672 rmmod nvme_tcp 00:10:05.672 rmmod nvme_fabrics 00:10:05.672 rmmod nvme_keyring 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 72166 ']' 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 72166 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 72166 ']' 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 72166 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72166 00:10:05.672 killing process with pid 72166 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72166' 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 72166 00:10:05.672 [2024-05-14 22:57:17.880974] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:05.672 22:57:17 nvmf_tcp.nvmf_lvol -- 
common/autotest_common.sh@970 -- # wait 72166 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:05.931 ************************************ 00:10:05.931 END TEST nvmf_lvol 00:10:05.931 ************************************ 00:10:05.931 00:10:05.931 real 0m15.998s 00:10:05.931 user 1m7.283s 00:10:05.931 sys 0m3.766s 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:05.931 22:57:18 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:05.931 22:57:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:05.931 22:57:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:05.931 22:57:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:05.931 ************************************ 00:10:05.931 START TEST nvmf_lvs_grow 00:10:05.931 ************************************ 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:05.931 * Looking for test storage... 
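For reference, the nvmf_lvol teardown traced just above reduces to the following order (a minimal sketch, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock; the subsystem NQN and UUIDs are the ones from this particular run, not fixed values):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 1) stop exporting the namespace before touching the bdevs underneath it
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
# 2) delete the logical volume, then the lvstore that contained it
$rpc bdev_lvol_delete accf6ef8-52dc-401b-a7c0-0d3b70eb3d8e
$rpc bdev_lvol_delete_lvstore -u ba50aea3-214b-4367-a307-a11a221ccd38
# 3) nvmftestfini then unloads the kernel initiator modules, as the rmmod lines above show
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics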
00:10:05.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:05.931 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:06.190 Cannot find device "nvmf_tgt_br" 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.190 Cannot find device "nvmf_tgt_br2" 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:06.190 Cannot find device "nvmf_tgt_br" 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:06.190 Cannot find device "nvmf_tgt_br2" 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.190 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:06.190 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:06.449 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:06.449 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:06.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:10:06.450 00:10:06.450 --- 10.0.0.2 ping statistics --- 00:10:06.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.450 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:06.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:06.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:06.450 00:10:06.450 --- 10.0.0.3 ping statistics --- 00:10:06.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.450 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:06.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:10:06.450 00:10:06.450 --- 10.0.0.1 ping statistics --- 00:10:06.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.450 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=72680 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 72680 00:10:06.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 72680 ']' 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
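The nvmf_veth_init sequence traced above boils down to the topology below (a minimal sketch, assuming root privileges and the same interface names and 10.0.0.0/24 addresses as this run; the real helper also wires up a second target interface, nvmf_tgt_if2 at 10.0.0.3, omitted here for brevity):

set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the default netns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                              # bridge the two host-side veth ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # the namespaced target IP should now answer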
00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:06.450 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:06.450 [2024-05-14 22:57:18.697724] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:10:06.450 [2024-05-14 22:57:18.697821] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.450 [2024-05-14 22:57:18.837367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.708 [2024-05-14 22:57:18.899667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.708 [2024-05-14 22:57:18.899715] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.708 [2024-05-14 22:57:18.899726] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.708 [2024-05-14 22:57:18.899735] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.708 [2024-05-14 22:57:18.899742] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.708 [2024-05-14 22:57:18.899785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.708 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:06.708 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:10:06.708 22:57:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:06.708 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.708 22:57:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:06.708 22:57:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.708 22:57:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:06.967 [2024-05-14 22:57:19.276412] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:06.967 ************************************ 00:10:06.967 START TEST lvs_grow_clean 00:10:06.967 ************************************ 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:06.967 22:57:19 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:06.967 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:07.532 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:07.532 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:07.791 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:07.791 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:07.791 22:57:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:08.049 22:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:08.049 22:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:08.049 22:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 lvol 150 00:10:08.308 22:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6690d8e0-43c5-4240-9f57-cadd77b68f7b 00:10:08.308 22:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:08.308 22:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:08.567 [2024-05-14 22:57:20.761847] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:08.567 [2024-05-14 22:57:20.761932] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:08.567 true 00:10:08.567 22:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:08.567 22:57:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:08.826 22:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:08.826 22:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:09.085 22:57:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6690d8e0-43c5-4240-9f57-cadd77b68f7b 00:10:09.343 22:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:09.603 [2024-05-14 22:57:21.794179] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:09.603 [2024-05-14 22:57:21.794448] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.603 22:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:09.862 22:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72834 00:10:09.862 22:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:09.862 22:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:09.862 22:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72834 /var/tmp/bdevperf.sock 00:10:09.862 22:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 72834 ']' 00:10:09.862 22:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:09.862 22:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:09.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:09.862 22:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:09.862 22:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:09.862 22:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:09.862 [2024-05-14 22:57:22.151259] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
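Condensed, the lvs_grow_clean setup traced above is: back an lvstore with a file-based AIO bdev, carve a 150 MiB lvol out of it, and export that lvol over NVMe-oF/TCP. A minimal sketch, assuming nvmf_tgt is already running and the tcp transport was created once earlier with nvmf_create_transport -t tcp -o -u 8192; the UUID capture mirrors what the test script does, and exact rpc.py output formats may vary between SPDK versions:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio"                                   # 200 MiB backing file
$rpc bdev_aio_create "$aio" aio_bdev 4096                 # expose it as an AIO bdev with 4 KiB blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)          # 150 MiB logical volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420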
00:10:09.862 [2024-05-14 22:57:22.151352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72834 ] 00:10:10.121 [2024-05-14 22:57:22.285901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.121 [2024-05-14 22:57:22.346414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.057 22:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:11.057 22:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:10:11.057 22:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:11.316 Nvme0n1 00:10:11.316 22:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:11.574 [ 00:10:11.574 { 00:10:11.574 "aliases": [ 00:10:11.574 "6690d8e0-43c5-4240-9f57-cadd77b68f7b" 00:10:11.574 ], 00:10:11.574 "assigned_rate_limits": { 00:10:11.574 "r_mbytes_per_sec": 0, 00:10:11.574 "rw_ios_per_sec": 0, 00:10:11.574 "rw_mbytes_per_sec": 0, 00:10:11.574 "w_mbytes_per_sec": 0 00:10:11.574 }, 00:10:11.574 "block_size": 4096, 00:10:11.574 "claimed": false, 00:10:11.574 "driver_specific": { 00:10:11.574 "mp_policy": "active_passive", 00:10:11.574 "nvme": [ 00:10:11.574 { 00:10:11.574 "ctrlr_data": { 00:10:11.574 "ana_reporting": false, 00:10:11.574 "cntlid": 1, 00:10:11.574 "firmware_revision": "24.05", 00:10:11.574 "model_number": "SPDK bdev Controller", 00:10:11.574 "multi_ctrlr": true, 00:10:11.574 "oacs": { 00:10:11.574 "firmware": 0, 00:10:11.574 "format": 0, 00:10:11.574 "ns_manage": 0, 00:10:11.574 "security": 0 00:10:11.574 }, 00:10:11.574 "serial_number": "SPDK0", 00:10:11.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:11.574 "vendor_id": "0x8086" 00:10:11.574 }, 00:10:11.574 "ns_data": { 00:10:11.574 "can_share": true, 00:10:11.574 "id": 1 00:10:11.574 }, 00:10:11.574 "trid": { 00:10:11.574 "adrfam": "IPv4", 00:10:11.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:11.574 "traddr": "10.0.0.2", 00:10:11.574 "trsvcid": "4420", 00:10:11.574 "trtype": "TCP" 00:10:11.574 }, 00:10:11.574 "vs": { 00:10:11.574 "nvme_version": "1.3" 00:10:11.574 } 00:10:11.574 } 00:10:11.574 ] 00:10:11.574 }, 00:10:11.574 "memory_domains": [ 00:10:11.574 { 00:10:11.574 "dma_device_id": "system", 00:10:11.574 "dma_device_type": 1 00:10:11.574 } 00:10:11.574 ], 00:10:11.574 "name": "Nvme0n1", 00:10:11.574 "num_blocks": 38912, 00:10:11.574 "product_name": "NVMe disk", 00:10:11.574 "supported_io_types": { 00:10:11.574 "abort": true, 00:10:11.574 "compare": true, 00:10:11.574 "compare_and_write": true, 00:10:11.574 "flush": true, 00:10:11.574 "nvme_admin": true, 00:10:11.574 "nvme_io": true, 00:10:11.574 "read": true, 00:10:11.574 "reset": true, 00:10:11.574 "unmap": true, 00:10:11.574 "write": true, 00:10:11.574 "write_zeroes": true 00:10:11.574 }, 00:10:11.574 "uuid": "6690d8e0-43c5-4240-9f57-cadd77b68f7b", 00:10:11.574 "zoned": false 00:10:11.574 } 00:10:11.574 ] 00:10:11.574 22:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72876 00:10:11.574 22:57:23 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:11.574 22:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:11.574 Running I/O for 10 seconds... 00:10:12.511 Latency(us) 00:10:12.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.511 Nvme0n1 : 1.00 7918.00 30.93 0.00 0.00 0.00 0.00 0.00 00:10:12.511 =================================================================================================================== 00:10:12.511 Total : 7918.00 30.93 0.00 0.00 0.00 0.00 0.00 00:10:12.511 00:10:13.479 22:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:13.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.479 Nvme0n1 : 2.00 7935.50 31.00 0.00 0.00 0.00 0.00 0.00 00:10:13.479 =================================================================================================================== 00:10:13.479 Total : 7935.50 31.00 0.00 0.00 0.00 0.00 0.00 00:10:13.479 00:10:13.738 true 00:10:13.738 22:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:13.738 22:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:14.305 22:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:14.305 22:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:14.305 22:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 72876 00:10:14.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.564 Nvme0n1 : 3.00 7940.67 31.02 0.00 0.00 0.00 0.00 0.00 00:10:14.564 =================================================================================================================== 00:10:14.564 Total : 7940.67 31.02 0.00 0.00 0.00 0.00 0.00 00:10:14.564 00:10:15.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.499 Nvme0n1 : 4.00 7962.00 31.10 0.00 0.00 0.00 0.00 0.00 00:10:15.499 =================================================================================================================== 00:10:15.499 Total : 7962.00 31.10 0.00 0.00 0.00 0.00 0.00 00:10:15.499 00:10:16.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.874 Nvme0n1 : 5.00 7985.60 31.19 0.00 0.00 0.00 0.00 0.00 00:10:16.874 =================================================================================================================== 00:10:16.874 Total : 7985.60 31.19 0.00 0.00 0.00 0.00 0.00 00:10:16.874 00:10:17.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.808 Nvme0n1 : 6.00 7968.67 31.13 0.00 0.00 0.00 0.00 0.00 00:10:17.808 =================================================================================================================== 00:10:17.808 Total : 7968.67 31.13 0.00 0.00 0.00 0.00 0.00 00:10:17.808 00:10:18.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:10:18.759 Nvme0n1 : 7.00 7938.14 31.01 0.00 0.00 0.00 0.00 0.00 00:10:18.759 =================================================================================================================== 00:10:18.759 Total : 7938.14 31.01 0.00 0.00 0.00 0.00 0.00 00:10:18.759 00:10:19.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.696 Nvme0n1 : 8.00 7930.75 30.98 0.00 0.00 0.00 0.00 0.00 00:10:19.696 =================================================================================================================== 00:10:19.696 Total : 7930.75 30.98 0.00 0.00 0.00 0.00 0.00 00:10:19.696 00:10:20.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.645 Nvme0n1 : 9.00 7876.44 30.77 0.00 0.00 0.00 0.00 0.00 00:10:20.645 =================================================================================================================== 00:10:20.645 Total : 7876.44 30.77 0.00 0.00 0.00 0.00 0.00 00:10:20.645 00:10:21.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.583 Nvme0n1 : 10.00 7856.80 30.69 0.00 0.00 0.00 0.00 0.00 00:10:21.583 =================================================================================================================== 00:10:21.583 Total : 7856.80 30.69 0.00 0.00 0.00 0.00 0.00 00:10:21.583 00:10:21.583 00:10:21.583 Latency(us) 00:10:21.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.583 Nvme0n1 : 10.01 7862.43 30.71 0.00 0.00 16274.66 7923.90 49330.73 00:10:21.583 =================================================================================================================== 00:10:21.583 Total : 7862.43 30.71 0.00 0.00 16274.66 7923.90 49330.73 00:10:21.583 0 00:10:21.583 22:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72834 00:10:21.583 22:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 72834 ']' 00:10:21.583 22:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 72834 00:10:21.583 22:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:10:21.583 22:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:21.583 22:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72834 00:10:21.583 22:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:21.583 22:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:21.583 killing process with pid 72834 00:10:21.583 22:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72834' 00:10:21.583 22:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 72834 00:10:21.583 Received shutdown signal, test time was about 10.000000 seconds 00:10:21.583 00:10:21.583 Latency(us) 00:10:21.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.583 =================================================================================================================== 00:10:21.583 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:21.583 22:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@970 -- # wait 72834 00:10:21.843 22:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:22.103 22:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:22.362 22:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:22.362 22:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:22.620 22:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:22.620 22:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:22.621 22:57:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:22.906 [2024-05-14 22:57:35.200376] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:22.906 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:23.190 2024/05/14 22:57:35 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:b0c0d907-c21c-4514-bfa4-2a58c990fdc5], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:23.190 request: 00:10:23.190 { 00:10:23.190 "method": "bdev_lvol_get_lvstores", 00:10:23.190 "params": { 00:10:23.190 "uuid": 
"b0c0d907-c21c-4514-bfa4-2a58c990fdc5" 00:10:23.190 } 00:10:23.190 } 00:10:23.190 Got JSON-RPC error response 00:10:23.190 GoRPCClient: error on JSON-RPC call 00:10:23.190 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:10:23.190 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:23.190 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:23.190 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:23.190 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:23.450 aio_bdev 00:10:23.450 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6690d8e0-43c5-4240-9f57-cadd77b68f7b 00:10:23.450 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=6690d8e0-43c5-4240-9f57-cadd77b68f7b 00:10:23.450 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:23.450 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:10:23.450 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:23.450 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:23.450 22:57:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:24.019 22:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6690d8e0-43c5-4240-9f57-cadd77b68f7b -t 2000 00:10:24.019 [ 00:10:24.019 { 00:10:24.019 "aliases": [ 00:10:24.019 "lvs/lvol" 00:10:24.019 ], 00:10:24.019 "assigned_rate_limits": { 00:10:24.019 "r_mbytes_per_sec": 0, 00:10:24.019 "rw_ios_per_sec": 0, 00:10:24.019 "rw_mbytes_per_sec": 0, 00:10:24.019 "w_mbytes_per_sec": 0 00:10:24.019 }, 00:10:24.019 "block_size": 4096, 00:10:24.019 "claimed": false, 00:10:24.019 "driver_specific": { 00:10:24.019 "lvol": { 00:10:24.019 "base_bdev": "aio_bdev", 00:10:24.019 "clone": false, 00:10:24.019 "esnap_clone": false, 00:10:24.019 "lvol_store_uuid": "b0c0d907-c21c-4514-bfa4-2a58c990fdc5", 00:10:24.019 "num_allocated_clusters": 38, 00:10:24.019 "snapshot": false, 00:10:24.019 "thin_provision": false 00:10:24.019 } 00:10:24.019 }, 00:10:24.019 "name": "6690d8e0-43c5-4240-9f57-cadd77b68f7b", 00:10:24.019 "num_blocks": 38912, 00:10:24.019 "product_name": "Logical Volume", 00:10:24.019 "supported_io_types": { 00:10:24.019 "abort": false, 00:10:24.019 "compare": false, 00:10:24.019 "compare_and_write": false, 00:10:24.019 "flush": false, 00:10:24.019 "nvme_admin": false, 00:10:24.019 "nvme_io": false, 00:10:24.019 "read": true, 00:10:24.019 "reset": true, 00:10:24.019 "unmap": true, 00:10:24.019 "write": true, 00:10:24.019 "write_zeroes": true 00:10:24.019 }, 00:10:24.019 "uuid": "6690d8e0-43c5-4240-9f57-cadd77b68f7b", 00:10:24.019 "zoned": false 00:10:24.019 } 00:10:24.019 ] 00:10:24.279 22:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:10:24.279 22:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:24.279 22:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:24.538 22:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:24.538 22:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:24.538 22:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:24.797 22:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:24.797 22:57:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6690d8e0-43c5-4240-9f57-cadd77b68f7b 00:10:25.056 22:57:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b0c0d907-c21c-4514-bfa4-2a58c990fdc5 00:10:25.314 22:57:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:25.572 22:57:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:26.138 ************************************ 00:10:26.138 END TEST lvs_grow_clean 00:10:26.138 ************************************ 00:10:26.138 00:10:26.138 real 0m18.920s 00:10:26.138 user 0m18.353s 00:10:26.138 sys 0m2.188s 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:26.138 ************************************ 00:10:26.138 START TEST lvs_grow_dirty 00:10:26.138 ************************************ 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:26.138 22:57:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:26.138 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:26.397 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:26.397 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:26.656 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:26.656 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:26.656 22:57:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:26.914 22:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:26.914 22:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:26.914 22:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d0dc627a-0d46-493f-bd63-aff68eca25b6 lvol 150 00:10:27.238 22:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0371503e-6588-40e3-9811-eaba4d9a10ec 00:10:27.238 22:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:27.238 22:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:27.495 [2024-05-14 22:57:39.684662] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:27.495 [2024-05-14 22:57:39.684781] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:27.495 true 00:10:27.495 22:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:27.495 22:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:27.753 22:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:27.753 22:57:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:28.011 22:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0371503e-6588-40e3-9811-eaba4d9a10ec 00:10:28.269 22:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:10:28.527 [2024-05-14 22:57:40.789299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.527 22:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:28.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:28.787 22:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73280 00:10:28.787 22:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:28.787 22:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:28.787 22:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73280 /var/tmp/bdevperf.sock 00:10:28.787 22:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 73280 ']' 00:10:28.787 22:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:28.787 22:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:28.787 22:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:28.787 22:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:28.787 22:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:28.787 [2024-05-14 22:57:41.098976] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
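The grow itself, which the run below performs while bdevperf keeps random writes in flight, comes down to a handful of RPCs plus a cluster-count check. A sketch assuming the bdevperf instance just started owns /var/tmp/bdevperf.sock and using this run's lvstore UUID; note the real script enlarges the backing file and rescans the AIO bdev before I/O starts and only issues the grow mid-run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
lvs=d0dc627a-0d46-493f-bd63-aff68eca25b6
# connect bdevperf to the exported namespace and kick off the 10 s randwrite job
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
truncate -s 400M "$aio"                 # enlarge the backing file
$rpc bdev_aio_rescan aio_bdev           # let the AIO bdev pick up the new size
$rpc bdev_lvol_grow_lvstore -u "$lvs"   # grow the lvstore into the new space
# total_data_clusters should move from 49 to 99 once the grow completes
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'
wait                                    # let the bdevperf job finish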
00:10:28.787 [2024-05-14 22:57:41.099075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73280 ] 00:10:29.045 [2024-05-14 22:57:41.234630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.045 [2024-05-14 22:57:41.292837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.977 22:57:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:29.977 22:57:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:10:29.977 22:57:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:30.235 Nvme0n1 00:10:30.235 22:57:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:30.493 [ 00:10:30.493 { 00:10:30.493 "aliases": [ 00:10:30.493 "0371503e-6588-40e3-9811-eaba4d9a10ec" 00:10:30.493 ], 00:10:30.493 "assigned_rate_limits": { 00:10:30.493 "r_mbytes_per_sec": 0, 00:10:30.493 "rw_ios_per_sec": 0, 00:10:30.493 "rw_mbytes_per_sec": 0, 00:10:30.493 "w_mbytes_per_sec": 0 00:10:30.493 }, 00:10:30.493 "block_size": 4096, 00:10:30.493 "claimed": false, 00:10:30.493 "driver_specific": { 00:10:30.493 "mp_policy": "active_passive", 00:10:30.493 "nvme": [ 00:10:30.493 { 00:10:30.493 "ctrlr_data": { 00:10:30.493 "ana_reporting": false, 00:10:30.493 "cntlid": 1, 00:10:30.493 "firmware_revision": "24.05", 00:10:30.493 "model_number": "SPDK bdev Controller", 00:10:30.493 "multi_ctrlr": true, 00:10:30.493 "oacs": { 00:10:30.493 "firmware": 0, 00:10:30.493 "format": 0, 00:10:30.493 "ns_manage": 0, 00:10:30.493 "security": 0 00:10:30.493 }, 00:10:30.493 "serial_number": "SPDK0", 00:10:30.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:30.493 "vendor_id": "0x8086" 00:10:30.493 }, 00:10:30.493 "ns_data": { 00:10:30.493 "can_share": true, 00:10:30.493 "id": 1 00:10:30.493 }, 00:10:30.493 "trid": { 00:10:30.493 "adrfam": "IPv4", 00:10:30.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:30.493 "traddr": "10.0.0.2", 00:10:30.493 "trsvcid": "4420", 00:10:30.493 "trtype": "TCP" 00:10:30.493 }, 00:10:30.493 "vs": { 00:10:30.493 "nvme_version": "1.3" 00:10:30.493 } 00:10:30.493 } 00:10:30.493 ] 00:10:30.493 }, 00:10:30.493 "memory_domains": [ 00:10:30.493 { 00:10:30.493 "dma_device_id": "system", 00:10:30.493 "dma_device_type": 1 00:10:30.493 } 00:10:30.493 ], 00:10:30.493 "name": "Nvme0n1", 00:10:30.493 "num_blocks": 38912, 00:10:30.493 "product_name": "NVMe disk", 00:10:30.493 "supported_io_types": { 00:10:30.493 "abort": true, 00:10:30.493 "compare": true, 00:10:30.493 "compare_and_write": true, 00:10:30.493 "flush": true, 00:10:30.493 "nvme_admin": true, 00:10:30.493 "nvme_io": true, 00:10:30.493 "read": true, 00:10:30.493 "reset": true, 00:10:30.493 "unmap": true, 00:10:30.493 "write": true, 00:10:30.493 "write_zeroes": true 00:10:30.493 }, 00:10:30.493 "uuid": "0371503e-6588-40e3-9811-eaba4d9a10ec", 00:10:30.493 "zoned": false 00:10:30.493 } 00:10:30.493 ] 00:10:30.493 22:57:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73333 00:10:30.493 22:57:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:30.493 22:57:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:30.751 Running I/O for 10 seconds... 00:10:31.732 Latency(us) 00:10:31.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.732 Nvme0n1 : 1.00 7710.00 30.12 0.00 0.00 0.00 0.00 0.00 00:10:31.732 =================================================================================================================== 00:10:31.732 Total : 7710.00 30.12 0.00 0.00 0.00 0.00 0.00 00:10:31.732 00:10:32.665 22:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:32.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.665 Nvme0n1 : 2.00 7720.50 30.16 0.00 0.00 0.00 0.00 0.00 00:10:32.665 =================================================================================================================== 00:10:32.665 Total : 7720.50 30.16 0.00 0.00 0.00 0.00 0.00 00:10:32.665 00:10:32.924 true 00:10:32.924 22:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:32.924 22:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:33.182 22:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:33.182 22:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:33.182 22:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 73333 00:10:33.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.749 Nvme0n1 : 3.00 7819.67 30.55 0.00 0.00 0.00 0.00 0.00 00:10:33.749 =================================================================================================================== 00:10:33.749 Total : 7819.67 30.55 0.00 0.00 0.00 0.00 0.00 00:10:33.749 00:10:34.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.682 Nvme0n1 : 4.00 7826.25 30.57 0.00 0.00 0.00 0.00 0.00 00:10:34.683 =================================================================================================================== 00:10:34.683 Total : 7826.25 30.57 0.00 0.00 0.00 0.00 0.00 00:10:34.683 00:10:35.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.618 Nvme0n1 : 5.00 7808.20 30.50 0.00 0.00 0.00 0.00 0.00 00:10:35.618 =================================================================================================================== 00:10:35.618 Total : 7808.20 30.50 0.00 0.00 0.00 0.00 0.00 00:10:35.618 00:10:36.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.566 Nvme0n1 : 6.00 7697.67 30.07 0.00 0.00 0.00 0.00 0.00 00:10:36.566 =================================================================================================================== 00:10:36.566 Total : 7697.67 30.07 0.00 0.00 0.00 0.00 0.00 00:10:36.566 00:10:37.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:10:37.941 Nvme0n1 : 7.00 7687.86 30.03 0.00 0.00 0.00 0.00 0.00 00:10:37.941 =================================================================================================================== 00:10:37.941 Total : 7687.86 30.03 0.00 0.00 0.00 0.00 0.00 00:10:37.941 00:10:38.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.876 Nvme0n1 : 8.00 7681.50 30.01 0.00 0.00 0.00 0.00 0.00 00:10:38.876 =================================================================================================================== 00:10:38.876 Total : 7681.50 30.01 0.00 0.00 0.00 0.00 0.00 00:10:38.876 00:10:39.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.809 Nvme0n1 : 9.00 7627.89 29.80 0.00 0.00 0.00 0.00 0.00 00:10:39.809 =================================================================================================================== 00:10:39.809 Total : 7627.89 29.80 0.00 0.00 0.00 0.00 0.00 00:10:39.809 00:10:40.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.746 Nvme0n1 : 10.00 7632.50 29.81 0.00 0.00 0.00 0.00 0.00 00:10:40.746 =================================================================================================================== 00:10:40.746 Total : 7632.50 29.81 0.00 0.00 0.00 0.00 0.00 00:10:40.746 00:10:40.746 00:10:40.746 Latency(us) 00:10:40.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.746 Nvme0n1 : 10.01 7635.27 29.83 0.00 0.00 16758.06 2934.23 132501.88 00:10:40.746 =================================================================================================================== 00:10:40.746 Total : 7635.27 29.83 0.00 0.00 16758.06 2934.23 132501.88 00:10:40.746 0 00:10:40.746 22:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73280 00:10:40.746 22:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 73280 ']' 00:10:40.746 22:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 73280 00:10:40.746 22:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:10:40.746 22:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:40.746 22:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73280 00:10:40.746 killing process with pid 73280 00:10:40.746 Received shutdown signal, test time was about 10.000000 seconds 00:10:40.746 00:10:40.746 Latency(us) 00:10:40.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.746 =================================================================================================================== 00:10:40.746 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:40.746 22:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:40.746 22:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:40.746 22:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73280' 00:10:40.746 22:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 73280 00:10:40.746 22:57:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@970 -- # wait 73280 00:10:41.005 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:41.005 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:41.266 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:41.266 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:41.533 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:41.533 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:41.533 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 72680 00:10:41.533 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 72680 00:10:41.807 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 72680 Killed "${NVMF_APP[@]}" "$@" 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=73496 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 73496 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 73496 ']' 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:41.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:41.807 22:57:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:41.808 [2024-05-14 22:57:53.983992] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:10:41.808 [2024-05-14 22:57:53.984102] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.808 [2024-05-14 22:57:54.124639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.808 [2024-05-14 22:57:54.194980] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.808 [2024-05-14 22:57:54.195039] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.808 [2024-05-14 22:57:54.195053] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.808 [2024-05-14 22:57:54.195063] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.808 [2024-05-14 22:57:54.195071] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.808 [2024-05-14 22:57:54.195100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.743 22:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:42.743 22:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:10:42.743 22:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.743 22:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.743 22:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:42.743 22:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.743 22:57:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:43.001 [2024-05-14 22:57:55.219642] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:43.001 [2024-05-14 22:57:55.220147] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:43.001 [2024-05-14 22:57:55.220487] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:43.001 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:43.001 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0371503e-6588-40e3-9811-eaba4d9a10ec 00:10:43.001 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=0371503e-6588-40e3-9811-eaba4d9a10ec 00:10:43.001 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:43.001 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:10:43.001 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:43.001 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:43.001 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:43.259 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0371503e-6588-40e3-9811-eaba4d9a10ec -t 2000 00:10:43.517 [ 00:10:43.517 { 00:10:43.517 "aliases": [ 00:10:43.517 "lvs/lvol" 00:10:43.517 ], 00:10:43.517 "assigned_rate_limits": { 00:10:43.517 "r_mbytes_per_sec": 0, 00:10:43.517 "rw_ios_per_sec": 0, 00:10:43.517 "rw_mbytes_per_sec": 0, 00:10:43.517 "w_mbytes_per_sec": 0 00:10:43.517 }, 00:10:43.517 "block_size": 4096, 00:10:43.517 "claimed": false, 00:10:43.517 "driver_specific": { 00:10:43.517 "lvol": { 00:10:43.517 "base_bdev": "aio_bdev", 00:10:43.517 "clone": false, 00:10:43.517 "esnap_clone": false, 00:10:43.517 "lvol_store_uuid": "d0dc627a-0d46-493f-bd63-aff68eca25b6", 00:10:43.517 "num_allocated_clusters": 38, 00:10:43.517 "snapshot": false, 00:10:43.517 "thin_provision": false 00:10:43.517 } 00:10:43.517 }, 00:10:43.517 "name": "0371503e-6588-40e3-9811-eaba4d9a10ec", 00:10:43.517 "num_blocks": 38912, 00:10:43.517 "product_name": "Logical Volume", 00:10:43.517 "supported_io_types": { 00:10:43.517 "abort": false, 00:10:43.517 "compare": false, 00:10:43.517 "compare_and_write": false, 00:10:43.517 "flush": false, 00:10:43.517 "nvme_admin": false, 00:10:43.517 "nvme_io": false, 00:10:43.517 "read": true, 00:10:43.517 "reset": true, 00:10:43.517 "unmap": true, 00:10:43.517 "write": true, 00:10:43.517 "write_zeroes": true 00:10:43.517 }, 00:10:43.517 "uuid": "0371503e-6588-40e3-9811-eaba4d9a10ec", 00:10:43.517 "zoned": false 00:10:43.517 } 00:10:43.517 ] 00:10:43.517 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:10:43.517 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:43.517 22:57:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:43.775 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:43.775 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:43.775 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:44.031 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:44.031 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:44.289 [2024-05-14 22:57:56.585615] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:44.289 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:44.548 2024/05/14 22:57:56 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:d0dc627a-0d46-493f-bd63-aff68eca25b6], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:44.548 request: 00:10:44.548 { 00:10:44.548 "method": "bdev_lvol_get_lvstores", 00:10:44.548 "params": { 00:10:44.548 "uuid": "d0dc627a-0d46-493f-bd63-aff68eca25b6" 00:10:44.548 } 00:10:44.548 } 00:10:44.548 Got JSON-RPC error response 00:10:44.548 GoRPCClient: error on JSON-RPC call 00:10:44.548 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:44.548 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:44.548 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:44.548 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:44.548 22:57:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:44.806 aio_bdev 00:10:44.806 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0371503e-6588-40e3-9811-eaba4d9a10ec 00:10:44.806 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=0371503e-6588-40e3-9811-eaba4d9a10ec 00:10:44.806 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:10:44.806 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:10:44.806 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:10:44.806 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:10:44.806 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:45.064 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0371503e-6588-40e3-9811-eaba4d9a10ec -t 2000 00:10:45.631 [ 00:10:45.631 { 00:10:45.631 "aliases": [ 00:10:45.631 "lvs/lvol" 00:10:45.631 ], 00:10:45.631 
"assigned_rate_limits": { 00:10:45.631 "r_mbytes_per_sec": 0, 00:10:45.631 "rw_ios_per_sec": 0, 00:10:45.631 "rw_mbytes_per_sec": 0, 00:10:45.631 "w_mbytes_per_sec": 0 00:10:45.631 }, 00:10:45.631 "block_size": 4096, 00:10:45.631 "claimed": false, 00:10:45.631 "driver_specific": { 00:10:45.631 "lvol": { 00:10:45.631 "base_bdev": "aio_bdev", 00:10:45.631 "clone": false, 00:10:45.631 "esnap_clone": false, 00:10:45.631 "lvol_store_uuid": "d0dc627a-0d46-493f-bd63-aff68eca25b6", 00:10:45.631 "num_allocated_clusters": 38, 00:10:45.631 "snapshot": false, 00:10:45.631 "thin_provision": false 00:10:45.631 } 00:10:45.631 }, 00:10:45.631 "name": "0371503e-6588-40e3-9811-eaba4d9a10ec", 00:10:45.631 "num_blocks": 38912, 00:10:45.631 "product_name": "Logical Volume", 00:10:45.631 "supported_io_types": { 00:10:45.631 "abort": false, 00:10:45.631 "compare": false, 00:10:45.631 "compare_and_write": false, 00:10:45.631 "flush": false, 00:10:45.631 "nvme_admin": false, 00:10:45.631 "nvme_io": false, 00:10:45.631 "read": true, 00:10:45.631 "reset": true, 00:10:45.631 "unmap": true, 00:10:45.631 "write": true, 00:10:45.631 "write_zeroes": true 00:10:45.631 }, 00:10:45.631 "uuid": "0371503e-6588-40e3-9811-eaba4d9a10ec", 00:10:45.631 "zoned": false 00:10:45.631 } 00:10:45.631 ] 00:10:45.631 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:10:45.631 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:45.631 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:45.631 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:45.631 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:45.632 22:57:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:45.890 22:57:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:45.890 22:57:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0371503e-6588-40e3-9811-eaba4d9a10ec 00:10:46.481 22:57:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d0dc627a-0d46-493f-bd63-aff68eca25b6 00:10:46.481 22:57:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:46.767 22:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:47.333 ************************************ 00:10:47.333 END TEST lvs_grow_dirty 00:10:47.333 ************************************ 00:10:47.333 00:10:47.333 real 0m21.291s 00:10:47.333 user 0m43.997s 00:10:47.333 sys 0m7.901s 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:47.333 nvmf_trace.0 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:47.333 22:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:47.592 rmmod nvme_tcp 00:10:47.592 rmmod nvme_fabrics 00:10:47.592 rmmod nvme_keyring 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 73496 ']' 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 73496 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 73496 ']' 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 73496 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73496 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:47.592 killing process with pid 73496 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73496' 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 73496 00:10:47.592 22:57:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 73496 00:10:47.850 22:58:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:47.850 22:58:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:47.850 22:58:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:47.850 22:58:00 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:47.850 22:58:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:47.850 22:58:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.850 22:58:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.850 22:58:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.850 22:58:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:47.850 00:10:47.850 real 0m41.941s 00:10:47.850 user 1m8.956s 00:10:47.850 sys 0m10.668s 00:10:47.850 22:58:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:47.850 22:58:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:47.850 ************************************ 00:10:47.850 END TEST nvmf_lvs_grow 00:10:47.850 ************************************ 00:10:47.850 22:58:00 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:47.850 22:58:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:47.850 22:58:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:47.850 22:58:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:47.850 ************************************ 00:10:47.850 START TEST nvmf_bdev_io_wait 00:10:47.850 ************************************ 00:10:47.850 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:47.850 * Looking for test storage... 00:10:48.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:48.110 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:48.111 
22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:48.111 Cannot find device "nvmf_tgt_br" 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:48.111 Cannot find device "nvmf_tgt_br2" 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:48.111 Cannot find device "nvmf_tgt_br" 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:48.111 Cannot find device "nvmf_tgt_br2" 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:48.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:48.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:48.111 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
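Editor's note: the nvmf_veth_init calls around this point build a small veth/bridge topology for the NVMe/TCP run (sketch of the end state, inferred from the ip commands in this log, with the interface names and addresses shown here): nvmf_init_if carries 10.0.0.1/24 on the host side, nvmf_tgt_if (10.0.0.2/24, the 4420 listener) and nvmf_tgt_if2 (10.0.0.3/24) live inside the nvmf_tgt_ns_spdk namespace, and the peer ends nvmf_init_br / nvmf_tgt_br / nvmf_tgt_br2 are enslaved to the nvmf_br bridge created just below. Once the setup and pings below complete, the layout can be inspected with commands like these (illustrative only, not part of the test script):

    ip -4 addr show nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip -4 addr show
    ip link show master nvmf_br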
00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:48.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:10:48.370 00:10:48.370 --- 10.0.0.2 ping statistics --- 00:10:48.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.370 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:48.370 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:48.370 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:10:48.370 00:10:48.370 --- 10.0.0.3 ping statistics --- 00:10:48.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.370 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:48.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:48.370 00:10:48.370 --- 10.0.0.1 ping statistics --- 00:10:48.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.370 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=73915 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 73915 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 73915 ']' 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:48.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:48.370 22:58:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:48.370 [2024-05-14 22:58:00.712290] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:10:48.370 [2024-05-14 22:58:00.712394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.629 [2024-05-14 22:58:00.857254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.629 [2024-05-14 22:58:00.929773] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.629 [2024-05-14 22:58:00.929832] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:48.629 [2024-05-14 22:58:00.929845] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.629 [2024-05-14 22:58:00.929855] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.629 [2024-05-14 22:58:00.929864] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.629 [2024-05-14 22:58:00.929940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.629 [2024-05-14 22:58:00.930051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.629 [2024-05-14 22:58:00.930705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.629 [2024-05-14 22:58:00.930739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.566 [2024-05-14 22:58:01.813885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.566 Malloc0 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.566 
22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.566 [2024-05-14 22:58:01.865141] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:49.566 [2024-05-14 22:58:01.865440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73969 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=73971 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:49.566 { 00:10:49.566 "params": { 00:10:49.566 "name": "Nvme$subsystem", 00:10:49.566 "trtype": "$TEST_TRANSPORT", 00:10:49.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.566 "adrfam": "ipv4", 00:10:49.566 "trsvcid": "$NVMF_PORT", 00:10:49.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.566 "hdgst": ${hdgst:-false}, 00:10:49.566 "ddgst": ${ddgst:-false} 00:10:49.566 }, 00:10:49.566 "method": "bdev_nvme_attach_controller" 00:10:49.566 } 00:10:49.566 EOF 00:10:49.566 )") 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73973 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:49.566 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:49.567 { 00:10:49.567 "params": { 00:10:49.567 "name": "Nvme$subsystem", 00:10:49.567 "trtype": "$TEST_TRANSPORT", 00:10:49.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.567 "adrfam": "ipv4", 00:10:49.567 "trsvcid": "$NVMF_PORT", 00:10:49.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.567 "hdgst": ${hdgst:-false}, 00:10:49.567 "ddgst": ${ddgst:-false} 00:10:49.567 }, 00:10:49.567 "method": "bdev_nvme_attach_controller" 00:10:49.567 } 00:10:49.567 EOF 00:10:49.567 )") 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73976 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:49.567 { 00:10:49.567 "params": { 00:10:49.567 "name": "Nvme$subsystem", 00:10:49.567 "trtype": "$TEST_TRANSPORT", 00:10:49.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.567 "adrfam": "ipv4", 00:10:49.567 "trsvcid": "$NVMF_PORT", 00:10:49.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.567 "hdgst": ${hdgst:-false}, 00:10:49.567 "ddgst": ${ddgst:-false} 00:10:49.567 }, 00:10:49.567 "method": "bdev_nvme_attach_controller" 00:10:49.567 } 00:10:49.567 EOF 00:10:49.567 )") 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:49.567 { 00:10:49.567 "params": { 00:10:49.567 "name": "Nvme$subsystem", 00:10:49.567 "trtype": "$TEST_TRANSPORT", 00:10:49.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.567 "adrfam": "ipv4", 00:10:49.567 "trsvcid": "$NVMF_PORT", 00:10:49.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.567 "hdgst": ${hdgst:-false}, 00:10:49.567 "ddgst": ${ddgst:-false} 00:10:49.567 }, 00:10:49.567 "method": "bdev_nvme_attach_controller" 00:10:49.567 } 00:10:49.567 EOF 00:10:49.567 )") 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:49.567 "params": { 00:10:49.567 "name": "Nvme1", 00:10:49.567 "trtype": "tcp", 00:10:49.567 "traddr": "10.0.0.2", 00:10:49.567 "adrfam": "ipv4", 00:10:49.567 "trsvcid": "4420", 00:10:49.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.567 "hdgst": false, 00:10:49.567 "ddgst": false 00:10:49.567 }, 00:10:49.567 "method": "bdev_nvme_attach_controller" 00:10:49.567 }' 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:49.567 "params": { 00:10:49.567 "name": "Nvme1", 00:10:49.567 "trtype": "tcp", 00:10:49.567 "traddr": "10.0.0.2", 00:10:49.567 "adrfam": "ipv4", 00:10:49.567 "trsvcid": "4420", 00:10:49.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.567 "hdgst": false, 00:10:49.567 "ddgst": false 00:10:49.567 }, 00:10:49.567 "method": "bdev_nvme_attach_controller" 00:10:49.567 }' 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:49.567 "params": { 00:10:49.567 "name": "Nvme1", 00:10:49.567 "trtype": "tcp", 00:10:49.567 "traddr": "10.0.0.2", 00:10:49.567 "adrfam": "ipv4", 00:10:49.567 "trsvcid": "4420", 00:10:49.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.567 "hdgst": false, 00:10:49.567 "ddgst": false 00:10:49.567 }, 00:10:49.567 "method": "bdev_nvme_attach_controller" 00:10:49.567 }' 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
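For reference, the gen_nvmf_target_json output that each bdevperf instance reads over --json /dev/fd/63 resolves to an entry like the sketch below. The params are copied from the printf lines in this trace; the outer "subsystems"/"bdev" wrapper is an assumption based on nvmf/common.sh, since the log only prints the per-controller entry.
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}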
00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:49.567 "params": { 00:10:49.567 "name": "Nvme1", 00:10:49.567 "trtype": "tcp", 00:10:49.567 "traddr": "10.0.0.2", 00:10:49.567 "adrfam": "ipv4", 00:10:49.567 "trsvcid": "4420", 00:10:49.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.567 "hdgst": false, 00:10:49.567 "ddgst": false 00:10:49.567 }, 00:10:49.567 "method": "bdev_nvme_attach_controller" 00:10:49.567 }' 00:10:49.567 22:58:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 73969 00:10:49.567 [2024-05-14 22:58:01.930860] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:10:49.567 [2024-05-14 22:58:01.930942] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:49.567 [2024-05-14 22:58:01.931567] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:10:49.567 [2024-05-14 22:58:01.931632] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:49.567 [2024-05-14 22:58:01.944271] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:10:49.567 [2024-05-14 22:58:01.944630] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:49.567 [2024-05-14 22:58:01.949216] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:10:49.567 [2024-05-14 22:58:01.949865] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:49.826 [2024-05-14 22:58:02.112375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.826 [2024-05-14 22:58:02.152118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.826 [2024-05-14 22:58:02.167393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:49.826 [2024-05-14 22:58:02.195855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.826 [2024-05-14 22:58:02.207232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:50.084 [2024-05-14 22:58:02.240323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.084 [2024-05-14 22:58:02.250856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:50.084 [2024-05-14 22:58:02.286805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:10:50.084 Running I/O for 1 seconds... 00:10:50.084 Running I/O for 1 seconds... 00:10:50.084 Running I/O for 1 seconds... 00:10:50.084 Running I/O for 1 seconds... 
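The four bdevperf jobs above all follow the same launch pattern. A minimal sketch of it, assuming only the options shown in this trace (distinct core masks and -i instance IDs so the DPDK file prefixes spdk1..spdk4 do not collide); the real bdev_io_wait.sh records each PID and waits on them individually rather than in one wait call.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# one short bdevperf run per workload, all attaching to the same cnode1 target
$bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
The process substitution <(gen_nvmf_target_json) is what shows up as --json /dev/fd/63 in the trace.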
00:10:51.017 00:10:51.017 Latency(us) 00:10:51.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.017 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:51.017 Nvme1n1 : 1.00 163914.53 640.29 0.00 0.00 777.74 292.31 1779.90 00:10:51.017 =================================================================================================================== 00:10:51.017 Total : 163914.53 640.29 0.00 0.00 777.74 292.31 1779.90 00:10:51.017 00:10:51.017 Latency(us) 00:10:51.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.017 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:51.017 Nvme1n1 : 1.01 9209.14 35.97 0.00 0.00 13829.69 9055.88 21209.83 00:10:51.017 =================================================================================================================== 00:10:51.017 Total : 9209.14 35.97 0.00 0.00 13829.69 9055.88 21209.83 00:10:51.017 00:10:51.017 Latency(us) 00:10:51.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.017 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:51.017 Nvme1n1 : 1.01 7774.37 30.37 0.00 0.00 16382.37 5421.61 23592.96 00:10:51.017 =================================================================================================================== 00:10:51.017 Total : 7774.37 30.37 0.00 0.00 16382.37 5421.61 23592.96 00:10:51.274 00:10:51.274 Latency(us) 00:10:51.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.274 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:51.274 Nvme1n1 : 1.01 8389.92 32.77 0.00 0.00 15195.61 4974.78 24546.21 00:10:51.274 =================================================================================================================== 00:10:51.274 Total : 8389.92 32.77 0.00 0.00 15195.61 4974.78 24546.21 00:10:51.274 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 73971 00:10:51.274 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 73973 00:10:51.274 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 73976 00:10:51.274 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.274 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.274 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.274 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.274 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:51.274 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:51.274 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:51.274 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:51.533 rmmod nvme_tcp 00:10:51.533 rmmod nvme_fabrics 00:10:51.533 rmmod nvme_keyring 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 73915 ']' 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 73915 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 73915 ']' 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 73915 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73915 00:10:51.533 killing process with pid 73915 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73915' 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 73915 00:10:51.533 [2024-05-14 22:58:03.734446] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:51.533 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 73915 00:10:51.534 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:51.534 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:51.534 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:51.534 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.534 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:51.534 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.534 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.534 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.794 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:51.794 00:10:51.794 real 0m3.785s 00:10:51.794 user 0m16.354s 00:10:51.794 sys 0m1.954s 00:10:51.794 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:51.794 22:58:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.794 ************************************ 00:10:51.794 END TEST nvmf_bdev_io_wait 00:10:51.794 ************************************ 00:10:51.794 22:58:04 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:51.794 22:58:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:51.794 22:58:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:51.794 22:58:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:51.794 
************************************ 00:10:51.794 START TEST nvmf_queue_depth 00:10:51.794 ************************************ 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:51.794 * Looking for test storage... 00:10:51.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.794 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:51.795 Cannot find device "nvmf_tgt_br" 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:51.795 Cannot find device "nvmf_tgt_br2" 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:51.795 Cannot find device "nvmf_tgt_br" 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:51.795 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:52.057 Cannot find device "nvmf_tgt_br2" 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:52.057 22:58:04 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:52.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:52.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:10:52.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:10:52.057 00:10:52.057 --- 10.0.0.2 ping statistics --- 00:10:52.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.057 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:52.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:52.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:10:52.057 00:10:52.057 --- 10.0.0.3 ping statistics --- 00:10:52.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.057 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:52.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:52.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:52.057 00:10:52.057 --- 10.0.0.1 ping statistics --- 00:10:52.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.057 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=74202 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 74202 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 74202 ']' 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
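The nvmf_veth_init sequence traced above amounts to a small veth/bridge topology around the nvmf_tgt_ns_spdk namespace. A condensed sketch of the commands visible in this log (link-up steps folded into a comment, nothing added beyond what the trace runs):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target address, 10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring all interfaces (and lo inside the namespace) up, then bridge the host-side peers
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify this wiring before nvmf_tgt is started in the namespace.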
00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:52.057 22:58:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:52.316 [2024-05-14 22:58:04.494901] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:10:52.316 [2024-05-14 22:58:04.494989] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.316 [2024-05-14 22:58:04.627057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.575 [2024-05-14 22:58:04.711515] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.575 [2024-05-14 22:58:04.711568] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.575 [2024-05-14 22:58:04.711581] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.575 [2024-05-14 22:58:04.711590] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.575 [2024-05-14 22:58:04.711598] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.575 [2024-05-14 22:58:04.711621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.511 [2024-05-14 22:58:05.587346] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.511 Malloc0 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.511 22:58:05 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.511 [2024-05-14 22:58:05.641656] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:53.511 [2024-05-14 22:58:05.641881] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=74252 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 74252 /var/tmp/bdevperf.sock 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 74252 ']' 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:53.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:53.511 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.511 [2024-05-14 22:58:05.690736] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
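Setting up the target side of this queue_depth run by hand reduces to the rpc_cmd calls traced above; roughly, against the nvmf_tgt that was just started inside nvmf_tgt_ns_spdk (options copied verbatim from the trace):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MB malloc bdev, 512-byte block size
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
The bdevperf started below with -z then attaches to that listener over its own RPC socket (/var/tmp/bdevperf.sock) before bdevperf.py perform_tests drives the 1024-deep verify workload.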
00:10:53.511 [2024-05-14 22:58:05.690833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74252 ] 00:10:53.511 [2024-05-14 22:58:05.828714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.511 [2024-05-14 22:58:05.899724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.770 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:53.770 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:10:53.770 22:58:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:53.770 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.770 22:58:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.770 NVMe0n1 00:10:53.770 22:58:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.770 22:58:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:54.029 Running I/O for 10 seconds... 00:11:04.001 00:11:04.001 Latency(us) 00:11:04.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.002 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:04.002 Verification LBA range: start 0x0 length 0x4000 00:11:04.002 NVMe0n1 : 10.08 8359.79 32.66 0.00 0.00 121880.37 26929.34 116296.61 00:11:04.002 =================================================================================================================== 00:11:04.002 Total : 8359.79 32.66 0.00 0.00 121880.37 26929.34 116296.61 00:11:04.002 0 00:11:04.002 22:58:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 74252 00:11:04.002 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 74252 ']' 00:11:04.002 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 74252 00:11:04.002 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:11:04.002 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:04.002 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74252 00:11:04.002 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:04.002 killing process with pid 74252 00:11:04.002 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:04.002 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74252' 00:11:04.002 Received shutdown signal, test time was about 10.000000 seconds 00:11:04.002 00:11:04.002 Latency(us) 00:11:04.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.002 =================================================================================================================== 00:11:04.002 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:04.002 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 74252 00:11:04.002 22:58:16 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 74252 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:04.261 rmmod nvme_tcp 00:11:04.261 rmmod nvme_fabrics 00:11:04.261 rmmod nvme_keyring 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 74202 ']' 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 74202 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 74202 ']' 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 74202 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74202 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:04.261 killing process with pid 74202 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74202' 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 74202 00:11:04.261 [2024-05-14 22:58:16.619279] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:04.261 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 74202 00:11:04.519 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.519 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:04.519 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:04.519 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.519 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:04.519 22:58:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.519 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.519 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.519 22:58:16 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:04.519 00:11:04.519 real 0m12.829s 00:11:04.519 user 0m22.065s 00:11:04.520 sys 0m1.865s 00:11:04.520 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:04.520 22:58:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:04.520 ************************************ 00:11:04.520 END TEST nvmf_queue_depth 00:11:04.520 ************************************ 00:11:04.520 22:58:16 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:04.520 22:58:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:04.520 22:58:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.520 22:58:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:04.520 ************************************ 00:11:04.520 START TEST nvmf_target_multipath 00:11:04.520 ************************************ 00:11:04.520 22:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:04.778 * Looking for test storage... 00:11:04.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.778 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:04.779 22:58:16 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 
00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:04.779 22:58:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:04.779 Cannot find device "nvmf_tgt_br" 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:04.779 Cannot find device "nvmf_tgt_br2" 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:04.779 Cannot find device "nvmf_tgt_br" 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:04.779 Cannot find device "nvmf_tgt_br2" 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:04.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:04.779 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:04.779 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set 
nvmf_tgt_br up 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:05.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:11:05.039 00:11:05.039 --- 10.0.0.2 ping statistics --- 00:11:05.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.039 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:05.039 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:05.039 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:11:05.039 00:11:05.039 --- 10.0.0.3 ping statistics --- 00:11:05.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.039 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:05.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:11:05.039 00:11:05.039 --- 10.0.0.1 ping statistics --- 00:11:05.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.039 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=74572 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 74572 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 74572 ']' 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:05.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:05.039 22:58:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:05.039 [2024-05-14 22:58:17.368865] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:11:05.039 [2024-05-14 22:58:17.368969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.298 [2024-05-14 22:58:17.513989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.298 [2024-05-14 22:58:17.583897] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.298 [2024-05-14 22:58:17.583951] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.298 [2024-05-14 22:58:17.583963] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.298 [2024-05-14 22:58:17.583972] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.298 [2024-05-14 22:58:17.583980] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.298 [2024-05-14 22:58:17.584274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.298 [2024-05-14 22:58:17.584386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.298 [2024-05-14 22:58:17.584486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.298 [2024-05-14 22:58:17.584476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.231 22:58:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:06.231 22:58:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:11:06.231 22:58:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.231 22:58:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.231 22:58:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:06.231 22:58:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.231 22:58:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:06.231 [2024-05-14 22:58:18.576549] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.231 22:58:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:06.797 Malloc0 00:11:06.797 22:58:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:06.797 22:58:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:07.056 22:58:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.314 [2024-05-14 22:58:19.657455] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:07.314 [2024-05-14 22:58:19.658087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:11:07.314 22:58:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:07.572 [2024-05-14 22:58:19.950019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:07.830 22:58:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:11:07.830 22:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:08.088 22:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.088 22:58:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:11:08.088 22:58:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.088 22:58:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:08.088 22:58:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:11:10.619 22:58:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:10.619 22:58:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:10.619 22:58:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.619 22:58:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=74710 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:10.620 22:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:10.620 [global] 00:11:10.620 thread=1 00:11:10.620 invalidate=1 00:11:10.620 rw=randrw 00:11:10.620 time_based=1 00:11:10.620 runtime=6 00:11:10.620 ioengine=libaio 00:11:10.620 direct=1 00:11:10.620 bs=4096 00:11:10.620 iodepth=128 00:11:10.620 norandommap=0 00:11:10.620 numjobs=1 00:11:10.620 00:11:10.620 verify_dump=1 00:11:10.620 verify_backlog=512 00:11:10.620 verify_state_save=0 00:11:10.620 do_verify=1 00:11:10.620 verify=crc32c-intel 00:11:10.620 [job0] 00:11:10.620 filename=/dev/nvme0n1 00:11:10.620 Could not set queue depth (nvme0n1) 00:11:10.620 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.620 fio-3.35 00:11:10.620 Starting 1 thread 00:11:11.184 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:11.441 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- 
# local path=nvme0c0n1 ana_state=inaccessible 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:11.699 22:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:12.631 22:58:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:12.631 22:58:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:12.631 22:58:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:12.631 22:58:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:12.889 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:13.450 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:13.450 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:13.450 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:13.450 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:13.450 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:13.450 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:13.450 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:13.450 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:13.451 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:13.451 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:13.451 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:13.451 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:13.451 22:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:14.381 22:58:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:14.381 22:58:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:14.381 22:58:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:14.381 22:58:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 74710 00:11:16.923 00:11:16.923 job0: (groupid=0, jobs=1): err= 0: pid=74736: Tue May 14 22:58:28 2024 00:11:16.923 read: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(242MiB/6005msec) 00:11:16.923 slat (usec): min=2, max=7479, avg=55.10, stdev=253.69 00:11:16.923 clat (usec): min=778, max=56570, avg=8473.37, stdev=2029.54 00:11:16.923 lat (usec): min=798, max=56582, avg=8528.47, stdev=2038.61 00:11:16.923 clat percentiles (usec): 00:11:16.923 | 1.00th=[ 4752], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 7439], 00:11:16.923 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8586], 00:11:16.923 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[10159], 95.00th=[11207], 00:11:16.923 | 99.00th=[12780], 99.50th=[13566], 99.90th=[47449], 99.95th=[52691], 00:11:16.923 | 99.99th=[56361] 00:11:16.923 bw ( KiB/s): min= 4264, max=28448, per=51.14%, avg=21120.82, stdev=8188.48, samples=11 00:11:16.923 iops : min= 1066, max= 7112, avg=5280.18, stdev=2047.11, samples=11 00:11:16.923 write: IOPS=6276, BW=24.5MiB/s (25.7MB/s)(126MiB/5155msec); 0 zone resets 00:11:16.923 slat (usec): min=3, max=5319, avg=67.71, stdev=171.99 00:11:16.923 clat (usec): min=438, max=56536, avg=7290.30, stdev=2296.64 00:11:16.923 lat (usec): min=500, max=56568, avg=7358.01, stdev=2300.63 00:11:16.923 clat percentiles (usec): 00:11:16.923 | 1.00th=[ 3752], 5.00th=[ 5145], 10.00th=[ 6063], 20.00th=[ 6521], 00:11:16.923 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7373], 00:11:16.923 | 70.00th=[ 7570], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[ 9241], 00:11:16.923 | 99.00th=[10814], 99.50th=[12256], 99.90th=[52691], 99.95th=[55313], 00:11:16.923 | 99.99th=[56361] 00:11:16.923 bw ( KiB/s): min= 4216, max=27768, per=84.53%, avg=21223.82, stdev=8061.49, samples=11 00:11:16.923 iops : min= 1054, max= 6942, avg=5305.91, stdev=2015.35, samples=11 00:11:16.923 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:16.923 lat (msec) : 2=0.06%, 4=0.61%, 10=91.10%, 20=8.09%, 50=0.06% 00:11:16.923 lat (msec) : 100=0.08% 00:11:16.923 cpu : usr=5.11%, sys=22.83%, ctx=6160, majf=0, minf=133 00:11:16.923 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:16.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.923 issued rwts: total=61998,32357,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.923 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.923 00:11:16.923 Run status group 0 (all jobs): 00:11:16.923 READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=242MiB (254MB), run=6005-6005msec 00:11:16.923 WRITE: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=126MiB (133MB), run=5155-5155msec 00:11:16.923 00:11:16.923 Disk stats 
(read/write): 00:11:16.923 nvme0n1: ios=61093/31708, merge=0/0, ticks=481453/212537, in_queue=693990, util=98.63% 00:11:16.923 22:58:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:16.923 22:58:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:11:16.923 22:58:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:17.855 22:58:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:17.855 22:58:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:17.855 22:58:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:17.856 22:58:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:17.856 22:58:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=74864 00:11:17.856 22:58:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:17.856 22:58:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:17.856 [global] 00:11:17.856 thread=1 00:11:17.856 invalidate=1 00:11:17.856 rw=randrw 00:11:17.856 time_based=1 00:11:17.856 runtime=6 00:11:17.856 ioengine=libaio 00:11:17.856 direct=1 00:11:17.856 bs=4096 00:11:17.856 iodepth=128 00:11:17.856 norandommap=0 00:11:17.856 numjobs=1 00:11:17.856 00:11:17.856 verify_dump=1 00:11:17.856 verify_backlog=512 00:11:17.856 verify_state_save=0 00:11:17.856 do_verify=1 00:11:17.856 verify=crc32c-intel 00:11:17.856 [job0] 00:11:17.856 filename=/dev/nvme0n1 00:11:17.856 Could not set queue depth (nvme0n1) 00:11:18.113 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:18.113 fio-3.35 00:11:18.113 Starting 1 thread 00:11:19.046 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:19.304 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:19.562 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:19.563 22:58:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:20.506 22:58:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:20.506 22:58:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:20.506 22:58:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:20.506 22:58:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:20.768 22:58:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:21.026 22:58:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:22.005 22:58:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:22.005 22:58:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:22.005 22:58:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:22.005 22:58:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 74864 00:11:24.535 00:11:24.535 job0: (groupid=0, jobs=1): err= 0: pid=74889: Tue May 14 22:58:36 2024 00:11:24.535 read: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(273MiB/6003msec) 00:11:24.535 slat (usec): min=3, max=5737, avg=42.55, stdev=204.58 00:11:24.535 clat (usec): min=222, max=14970, avg=7494.27, stdev=1731.21 00:11:24.535 lat (usec): min=245, max=14992, avg=7536.83, stdev=1748.50 00:11:24.535 clat percentiles (usec): 00:11:24.535 | 1.00th=[ 3163], 5.00th=[ 4490], 10.00th=[ 5080], 20.00th=[ 6063], 00:11:24.535 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7832], 00:11:24.535 | 70.00th=[ 8291], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10290], 00:11:24.535 | 99.00th=[11731], 99.50th=[12125], 99.90th=[13304], 99.95th=[13698], 00:11:24.535 | 99.99th=[14222] 00:11:24.535 bw ( KiB/s): min= 8704, max=40208, per=54.31%, avg=25294.55, stdev=9265.72, samples=11 00:11:24.535 iops : min= 2176, max=10052, avg=6323.64, stdev=2316.43, samples=11 00:11:24.535 write: IOPS=6997, BW=27.3MiB/s (28.7MB/s)(148MiB/5412msec); 0 zone resets 00:11:24.535 slat (usec): min=13, max=1958, avg=55.55, stdev=135.11 00:11:24.535 clat (usec): min=374, max=14777, avg=6289.19, stdev=1696.52 00:11:24.535 lat (usec): min=442, max=14799, avg=6344.74, stdev=1711.91 00:11:24.535 clat percentiles (usec): 00:11:24.535 | 1.00th=[ 2573], 5.00th=[ 3359], 10.00th=[ 3818], 20.00th=[ 4490], 00:11:24.535 | 30.00th=[ 5407], 40.00th=[ 6259], 50.00th=[ 6652], 60.00th=[ 6980], 00:11:24.535 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8160], 95.00th=[ 8848], 00:11:24.535 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[11994], 99.95th=[12256], 00:11:24.535 | 99.99th=[13042] 00:11:24.535 bw ( KiB/s): min= 9136, max=39264, per=90.24%, avg=25258.18, stdev=8997.39, samples=11 00:11:24.535 iops : min= 2284, max= 9816, avg=6314.55, stdev=2249.35, samples=11 00:11:24.535 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:24.535 lat (msec) : 2=0.12%, 4=6.15%, 10=89.28%, 20=4.41% 00:11:24.535 cpu : usr=6.40%, sys=25.51%, ctx=7826, majf=0, minf=108 00:11:24.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:11:24.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.535 issued rwts: total=69895,37872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.535 00:11:24.535 Run status group 0 (all jobs): 00:11:24.535 READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=273MiB (286MB), run=6003-6003msec 00:11:24.535 WRITE: bw=27.3MiB/s (28.7MB/s), 27.3MiB/s-27.3MiB/s (28.7MB/s-28.7MB/s), io=148MiB (155MB), run=5412-5412msec 00:11:24.535 00:11:24.535 Disk stats (read/write): 00:11:24.535 nvme0n1: ios=68866/37351, merge=0/0, ticks=476427/212486, in_queue=688913, util=98.66% 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:24.535 22:58:36 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1215 -- # local i=0 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.535 22:58:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:24.536 22:58:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:24.536 22:58:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:24.536 22:58:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.536 22:58:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:24.536 rmmod nvme_tcp 00:11:24.536 rmmod nvme_fabrics 00:11:24.536 rmmod nvme_keyring 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 74572 ']' 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 74572 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 74572 ']' 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 74572 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74572 00:11:24.793 killing process with pid 74572 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74572' 00:11:24.793 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 74572 00:11:24.794 [2024-05-14 22:58:36.961007] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:24.794 22:58:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 -- # wait 74572 00:11:24.794 22:58:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.794 22:58:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:24.794 22:58:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.794 22:58:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.794 22:58:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.794 22:58:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.794 22:58:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.794 22:58:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.052 22:58:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:25.052 00:11:25.052 real 0m20.325s 00:11:25.052 user 1m20.003s 00:11:25.052 sys 0m6.482s 00:11:25.052 22:58:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:25.052 22:58:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:25.052 ************************************ 00:11:25.052 END TEST nvmf_target_multipath 00:11:25.052 ************************************ 00:11:25.052 22:58:37 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:25.052 22:58:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:25.052 22:58:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:25.052 22:58:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:25.052 ************************************ 00:11:25.052 START TEST nvmf_zcopy 00:11:25.052 ************************************ 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:25.052 * Looking for test storage... 
00:11:25.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.052 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:25.053 Cannot find device "nvmf_tgt_br" 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:25.053 Cannot find device "nvmf_tgt_br2" 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:25.053 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:25.311 Cannot find device "nvmf_tgt_br" 00:11:25.311 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:25.312 Cannot find device "nvmf_tgt_br2" 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:25.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:25.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:25.312 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:25.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:11:25.570 00:11:25.570 --- 10.0.0.2 ping statistics --- 00:11:25.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.570 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:25.570 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:25.570 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:25.570 00:11:25.570 --- 10.0.0.3 ping statistics --- 00:11:25.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.570 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:25.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:25.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:25.570 00:11:25.570 --- 10.0.0.1 ping statistics --- 00:11:25.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.570 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=75163 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 75163 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 75163 ']' 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:25.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:25.570 22:58:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:25.570 [2024-05-14 22:58:37.804980] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:11:25.570 [2024-05-14 22:58:37.805072] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.570 [2024-05-14 22:58:37.941428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.829 [2024-05-14 22:58:38.004497] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.829 [2024-05-14 22:58:38.004589] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:25.829 [2024-05-14 22:58:38.004604] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.829 [2024-05-14 22:58:38.004612] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.829 [2024-05-14 22:58:38.004620] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.829 [2024-05-14 22:58:38.004647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:26.763 [2024-05-14 22:58:38.887930] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:26.763 [2024-05-14 22:58:38.903891] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:26.763 [2024-05-14 22:58:38.904231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:26.763 malloc0 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:26.763 { 00:11:26.763 "params": { 00:11:26.763 "name": "Nvme$subsystem", 00:11:26.763 "trtype": "$TEST_TRANSPORT", 00:11:26.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:26.763 "adrfam": "ipv4", 00:11:26.763 "trsvcid": "$NVMF_PORT", 00:11:26.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:26.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:26.763 "hdgst": ${hdgst:-false}, 00:11:26.763 "ddgst": ${ddgst:-false} 00:11:26.763 }, 00:11:26.763 "method": "bdev_nvme_attach_controller" 00:11:26.763 } 00:11:26.763 EOF 00:11:26.763 )") 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:26.763 22:58:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:26.763 "params": { 00:11:26.763 "name": "Nvme1", 00:11:26.763 "trtype": "tcp", 00:11:26.763 "traddr": "10.0.0.2", 00:11:26.763 "adrfam": "ipv4", 00:11:26.763 "trsvcid": "4420", 00:11:26.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:26.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:26.763 "hdgst": false, 00:11:26.763 "ddgst": false 00:11:26.763 }, 00:11:26.763 "method": "bdev_nvme_attach_controller" 00:11:26.763 }' 00:11:26.763 [2024-05-14 22:58:38.996462] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:11:26.763 [2024-05-14 22:58:38.996558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75214 ] 00:11:26.763 [2024-05-14 22:58:39.131283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.085 [2024-05-14 22:58:39.200823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.085 Running I/O for 10 seconds... 
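With the target listening on its RPC socket, the xtrace above shows zcopy.sh configuring it over JSON-RPC and then launching bdevperf against it: rpc_cmd is the harness's JSON-RPC helper, and gen_nvmf_target_json emits the bdev_nvme_attach_controller config printed just before this point. A condensed replay of that sequence, with arguments taken verbatim from the log (a sketch, not a copy of the script):

    # Zero-copy TCP transport plus a test subsystem backed by a malloc bdev.
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # First bdevperf pass: 10 s verify workload, queue depth 128, 8 KiB I/O,
    # attaching to the subsystem via the generated JSON (the log's --json /dev/fd/62).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The table that follows is this first pass; its ~5903 IOPS at 8 KiB per I/O works out to the ~46.12 MiB/s it reports.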
00:11:37.063 00:11:37.063 Latency(us) 00:11:37.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.063 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:37.063 Verification LBA range: start 0x0 length 0x1000 00:11:37.063 Nvme1n1 : 10.02 5903.65 46.12 0.00 0.00 21609.41 2398.02 34555.35 00:11:37.063 =================================================================================================================== 00:11:37.063 Total : 5903.65 46.12 0.00 0.00 21609.41 2398.02 34555.35 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=75338 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:37.321 { 00:11:37.321 "params": { 00:11:37.321 "name": "Nvme$subsystem", 00:11:37.321 "trtype": "$TEST_TRANSPORT", 00:11:37.321 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:37.321 "adrfam": "ipv4", 00:11:37.321 "trsvcid": "$NVMF_PORT", 00:11:37.321 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:37.321 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:37.321 "hdgst": ${hdgst:-false}, 00:11:37.321 "ddgst": ${ddgst:-false} 00:11:37.321 }, 00:11:37.321 "method": "bdev_nvme_attach_controller" 00:11:37.321 } 00:11:37.321 EOF 00:11:37.321 )") 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:11:37.321 [2024-05-14 22:58:49.551731] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.551779] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:37.321 22:58:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:37.321 "params": { 00:11:37.321 "name": "Nvme1", 00:11:37.321 "trtype": "tcp", 00:11:37.321 "traddr": "10.0.0.2", 00:11:37.321 "adrfam": "ipv4", 00:11:37.321 "trsvcid": "4420", 00:11:37.321 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:37.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:37.321 "hdgst": false, 00:11:37.321 "ddgst": false 00:11:37.321 }, 00:11:37.321 "method": "bdev_nvme_attach_controller" 00:11:37.321 }' 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.563702] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.563734] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.575704] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.575739] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.585026] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:11:37.321 [2024-05-14 22:58:49.585095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75338 ] 00:11:37.321 [2024-05-14 22:58:49.587701] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.587728] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.599704] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.599732] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.611722] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.611754] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.623728] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.623784] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.635714] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.635743] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.647726] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.647754] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 
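From this point to the end of the excerpt the trace is dominated by one repeating pattern: the target rejects an nvmf_subsystem_add_ns call in subsystem.c ("Requested NSID 1 already in use"), nvmf_rpc_ns_paused then reports "Unable to add namespace", and the client-side JSON-RPC log records the resulting Code=-32602 "Invalid parameters" error. The test appears to keep re-issuing the call for an NSID that is already attached while the second bdevperf instance (-t 5 -q 128 -w randrw -M 50 -o 8192) initializes and runs, so the same trio of lines recurs every few milliseconds. One such request, reconstructed from the params map in the client log (shape only; the jsonrpc/id envelope fields are omitted, and the harness sends this through its own RPC client rather than printf):

    # Reconstructed request body behind each repeated failure above.
    printf '%s\n' '{
      "method": "nvmf_subsystem_add_ns",
      "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": { "bdev_name": "malloc0", "nsid": 1, "no_auto_visible": false }
      }
    }'
    # Logged response: Code=-32602 Msg=Invalid parameters (target reason: NSID 1 already in use)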
[2024-05-14 22:58:49.659721] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.659748] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.671756] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.671814] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.683757] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.683815] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.695731] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.695757] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.321 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.321 [2024-05-14 22:58:49.707737] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.321 [2024-05-14 22:58:49.707774] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.579 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.579 [2024-05-14 22:58:49.717168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.579 [2024-05-14 22:58:49.719750] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.579 [2024-05-14 22:58:49.719787] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.579 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.731779] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.731813] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.743751] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.743788] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.755793] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.755836] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.767759] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.767799] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.774915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.580 [2024-05-14 22:58:49.779757] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.779797] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.791820] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.791867] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.803807] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.803847] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) 
nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.815806] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.815847] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.827803] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.827840] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.839785] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.839812] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.851858] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.851894] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.863838] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.863869] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.875853] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.875883] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.887894] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 
22:58:49.887930] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.899853] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.899881] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.911892] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.911937] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 Running I/O for 5 seconds... 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.923939] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.923983] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.941295] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.941352] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.580 [2024-05-14 22:58:49.957173] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.580 [2024-05-14 22:58:49.957212] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.580 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.838 [2024-05-14 22:58:49.974238] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.838 [2024-05-14 22:58:49.974280] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.838 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.838 [2024-05-14 22:58:49.989699] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.838 [2024-05-14 22:58:49.989740] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.838 2024/05/14 22:58:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.838 [2024-05-14 22:58:50.005896] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.838 [2024-05-14 22:58:50.005938] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.838 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.838 [2024-05-14 22:58:50.022579] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.838 [2024-05-14 22:58:50.022628] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.838 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.838 [2024-05-14 22:58:50.038457] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.838 [2024-05-14 22:58:50.038501] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.838 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.048295] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.048331] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.064430] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.064474] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.080394] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.080433] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.096504] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.096554] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.106834] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.106877] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.121988] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.122047] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.132868] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.132923] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.147518] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.147557] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.163496] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.163535] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.181473] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.181513] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.195820] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.195859] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.211595] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.211634] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.839 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:37.839 [2024-05-14 22:58:50.228387] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.839 [2024-05-14 22:58:50.228426] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.097 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.097 [2024-05-14 22:58:50.243898] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.097 [2024-05-14 22:58:50.243944] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.097 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.097 [2024-05-14 22:58:50.260909] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.097 [2024-05-14 22:58:50.260949] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.097 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.097 [2024-05-14 22:58:50.276627] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.097 [2024-05-14 22:58:50.276666] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:38.097 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.097 [2024-05-14 22:58:50.286401] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.097 [2024-05-14 22:58:50.286436] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.097 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.097 [2024-05-14 22:58:50.300754] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.097 [2024-05-14 22:58:50.300807] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.097 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.097 [2024-05-14 22:58:50.315784] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.097 [2024-05-14 22:58:50.315823] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.097 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.097 [2024-05-14 22:58:50.326392] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.097 [2024-05-14 22:58:50.326428] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.097 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.097 [2024-05-14 22:58:50.337440] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.097 [2024-05-14 22:58:50.337476] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.098 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.098 [2024-05-14 22:58:50.354581] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.098 [2024-05-14 22:58:50.354621] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.098 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:38.098 [2024-05-14 22:58:50.370385] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.098 [2024-05-14 22:58:50.370431] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.098 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.098 [2024-05-14 22:58:50.385858] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.098 [2024-05-14 22:58:50.385900] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.098 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.098 [2024-05-14 22:58:50.396230] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.098 [2024-05-14 22:58:50.396271] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.098 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.098 [2024-05-14 22:58:50.411314] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.098 [2024-05-14 22:58:50.411381] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.098 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.098 [2024-05-14 22:58:50.428583] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.098 [2024-05-14 22:58:50.428650] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.098 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.098 [2024-05-14 22:58:50.444502] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.098 [2024-05-14 22:58:50.444579] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.098 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.098 [2024-05-14 22:58:50.462207] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.098 [2024-05-14 22:58:50.462262] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.098 2024/05/14 22:58:50 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.098 [2024-05-14 22:58:50.477638] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.098 [2024-05-14 22:58:50.477681] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.098 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.488306] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.488363] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.503223] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.503272] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.519306] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.519348] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.531578] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.531638] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.548510] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.548566] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.564214] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.564254] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.581249] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.581296] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.591835] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.591874] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.603170] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.603210] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.618551] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.618590] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.635298] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.635336] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:38.357 [2024-05-14 22:58:50.651111] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.357 [2024-05-14 22:58:50.651149] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:38.357 [2024-05-14 22:58:50.669612] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:38.357 [2024-05-14 22:58:50.669651] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:38.357 2024/05/14 22:58:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line failure repeats for every subsequent nvmf_subsystem_add_ns attempt, identical apart from timestamps, from 22:58:50.684 (elapsed 00:11:38.357) through 22:58:52.579 (elapsed 00:11:40.239); duplicate entries omitted ...]
00:11:40.239 [2024-05-14 22:58:52.589777] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:40.239 [2024-05-14 22:58:52.589810] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:40.239 2024/05/14 22:58:52 error on
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.239 [2024-05-14 22:58:52.600348] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.239 [2024-05-14 22:58:52.600386] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.239 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.239 [2024-05-14 22:58:52.611265] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.239 [2024-05-14 22:58:52.611322] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.239 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.239 [2024-05-14 22:58:52.626542] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.239 [2024-05-14 22:58:52.626584] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.497 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.497 [2024-05-14 22:58:52.641831] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.497 [2024-05-14 22:58:52.641866] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.497 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.497 [2024-05-14 22:58:52.657217] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.657258] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.667701] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.667740] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.682184] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.682224] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.697729] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.697792] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.708148] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.708185] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.722859] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.722897] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.738628] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.738680] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.749013] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.749052] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.763280] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.763319] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.778580] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.778618] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.795443] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.795481] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.810953] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.810991] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.823319] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.823361] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.841454] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.841505] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.856566] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.856611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.868889] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.868926] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.498 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.498 [2024-05-14 22:58:52.885660] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.498 [2024-05-14 22:58:52.885703] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:52.902006] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:52.902050] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:52.917416] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:52.917458] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:52.933148] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:52.933190] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:52.949234] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:52.949279] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:52.959903] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:52.959941] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:52.974801] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:52.974845] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:52.985532] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:52.985575] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:53.000558] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:53.000605] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:53.016306] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:53.016348] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:53.031984] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:53.032025] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:53.048994] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:53.049037] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:53.064738] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:40.757 [2024-05-14 22:58:53.064789] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:53.074732] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:53.074783] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:53.090104] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:53.090147] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:53.107774] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:53.107812] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:53.122844] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:53.122882] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:40.757 [2024-05-14 22:58:53.138661] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.757 [2024-05-14 22:58:53.138700] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.757 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.150630] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.150669] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.169041] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.169081] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.183341] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.183379] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.198832] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.198872] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.215579] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.215619] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.231989] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.232042] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.249359] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.249398] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.265437] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.265477] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.281827] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.281867] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.298825] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.298865] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.315093] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.315133] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.332163] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.332234] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.348487] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.348563] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.364634] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.364704] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.375271] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.375333] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.389451] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.389519] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.017 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.017 [2024-05-14 22:58:53.405366] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.017 [2024-05-14 22:58:53.405422] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.421279] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.421345] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.437110] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.437187] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.453387] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.453451] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.469307] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.469367] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.485436] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.485485] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.502444] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.502484] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.518306] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.518375] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.534588] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.534646] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.544261] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.544298] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.559777] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.559834] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:41.295 [2024-05-14 22:58:53.577067] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.577115] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.593047] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.593088] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.609850] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.609889] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.620299] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.620336] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.631304] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.631341] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.642204] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.642243] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.657245] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.657305] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.295 [2024-05-14 22:58:53.674080] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.295 [2024-05-14 22:58:53.674125] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.295 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.554 [2024-05-14 22:58:53.689571] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.554 [2024-05-14 22:58:53.689612] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.554 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.554 [2024-05-14 22:58:53.700225] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.554 [2024-05-14 22:58:53.700264] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.554 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.554 [2024-05-14 22:58:53.714787] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.554 [2024-05-14 22:58:53.714824] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.554 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.554 [2024-05-14 22:58:53.730453] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.554 [2024-05-14 22:58:53.730494] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.554 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.554 [2024-05-14 22:58:53.740832] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.554 [2024-05-14 22:58:53.740873] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.554 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.554 [2024-05-14 22:58:53.755560] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.554 [2024-05-14 22:58:53.755605] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.554 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.554 [2024-05-14 22:58:53.766248] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.554 [2024-05-14 22:58:53.766287] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.554 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.554 [2024-05-14 22:58:53.780884] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.554 [2024-05-14 22:58:53.780933] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.555 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.555 [2024-05-14 22:58:53.791575] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.555 [2024-05-14 22:58:53.791637] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.555 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.555 [2024-05-14 22:58:53.806071] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.555 [2024-05-14 22:58:53.806137] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.555 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.555 [2024-05-14 22:58:53.821986] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.555 [2024-05-14 22:58:53.822029] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.555 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.555 [2024-05-14 22:58:53.837646] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.555 [2024-05-14 22:58:53.837685] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.555 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.555 [2024-05-14 22:58:53.853753] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.555 [2024-05-14 22:58:53.853821] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.555 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.555 [2024-05-14 22:58:53.871232] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.555 [2024-05-14 22:58:53.871273] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.555 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.555 [2024-05-14 22:58:53.887783] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.555 [2024-05-14 22:58:53.887819] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.555 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.555 [2024-05-14 22:58:53.903527] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.555 [2024-05-14 22:58:53.903583] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.555 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.555 [2024-05-14 22:58:53.913630] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.555 [2024-05-14 22:58:53.913683] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.555 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.555 [2024-05-14 22:58:53.928169] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.555 [2024-05-14 22:58:53.928224] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.555 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.555 [2024-05-14 22:58:53.940792] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:41.555 [2024-05-14 22:58:53.940847] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:53.958840] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:53.958893] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:53.974057] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:53.974114] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:53.984805] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:53.984863] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:53.999891] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:53.999948] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.018535] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.018595] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.034013] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.034072] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.050111] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.050161] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.062202] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.062243] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.079181] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.079219] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.094261] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.094299] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.104301] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.104339] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.118840] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.118884] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.134567] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
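The long run of repeated failures above and below is the duplicate-NSID path being exercised: nvmf_subsystem_add_ns keeps being re-issued for NSID 1 on nqn.2016-06.io.spdk:cnode1 while NSID 1 is already in use on that subsystem, so every call is rejected with JSON-RPC Code=-32602 (Invalid parameters) and the target logs "Requested NSID 1 already in use". A single attempt can be reproduced by hand roughly as follows (a sketch only, assuming the target's default RPC socket /var/tmp/spdk.sock and driving scripts/rpc.py the same way the test's rpc_cmd wrapper does):

  # Add malloc0 as NSID 1 once (expected to succeed), then request the same NSID again.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Expected on the second call: "Requested NSID 1 already in use" on the target side,
  # and Code=-32602 Msg=Invalid parameters returned to the client, as seen in this log.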
00:11:41.814 [2024-05-14 22:58:54.134605] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.152095] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.152137] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.167313] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.167354] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.177969] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.178011] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:41.814 [2024-05-14 22:58:54.192811] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.814 [2024-05-14 22:58:54.192849] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.814 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.208155] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.208197] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.224480] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.224527] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.239990] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.240033] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.255593] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.255631] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.267819] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.267859] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.283858] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.283897] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.300417] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.300455] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.317346] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.317388] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.327831] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.327870] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.338975] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.339011] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.354723] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.354773] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.370161] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.370200] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.380398] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.380434] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.073 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.073 [2024-05-14 22:58:54.394555] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.073 [2024-05-14 22:58:54.394593] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.074 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.074 [2024-05-14 22:58:54.404957] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.074 [2024-05-14 22:58:54.404997] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.074 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.074 [2024-05-14 22:58:54.415660] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.074 [2024-05-14 22:58:54.415699] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.074 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.074 [2024-05-14 22:58:54.430571] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.074 [2024-05-14 22:58:54.430612] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.074 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.074 [2024-05-14 22:58:54.448837] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.074 [2024-05-14 22:58:54.448878] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.074 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.464035] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.464072] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.474689] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.474740] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.489425] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.489476] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.508498] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.508543] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.522840] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.522877] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.538115] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.538152] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.548727] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.548775] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.563594] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.563633] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.574376] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.574415] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.589603] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.589657] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:42.333 [2024-05-14 22:58:54.606804] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.606843] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.622659] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.622695] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.633312] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.633356] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.648314] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.648354] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.665086] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.665138] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.681057] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.681107] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.698246] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.698291] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.333 [2024-05-14 22:58:54.714010] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.333 [2024-05-14 22:58:54.714052] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.333 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.592 [2024-05-14 22:58:54.724936] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.592 [2024-05-14 22:58:54.724975] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.592 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.592 [2024-05-14 22:58:54.740083] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.592 [2024-05-14 22:58:54.740126] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.592 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.592 [2024-05-14 22:58:54.756646] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.592 [2024-05-14 22:58:54.756688] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.592 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.592 [2024-05-14 22:58:54.773723] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.592 [2024-05-14 22:58:54.773774] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.592 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.592 [2024-05-14 22:58:54.789691] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.592 [2024-05-14 22:58:54.789732] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.592 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.592 [2024-05-14 22:58:54.807078] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.592 [2024-05-14 22:58:54.807116] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.592 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.592 [2024-05-14 22:58:54.822574] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.592 [2024-05-14 22:58:54.822613] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.592 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.592 [2024-05-14 22:58:54.833355] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.592 [2024-05-14 22:58:54.833395] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.592 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.592 [2024-05-14 22:58:54.848556] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.592 [2024-05-14 22:58:54.848597] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.593 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.593 [2024-05-14 22:58:54.864099] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.593 [2024-05-14 22:58:54.864154] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.593 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.593 [2024-05-14 22:58:54.882942] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.593 [2024-05-14 22:58:54.882996] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.593 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.593 [2024-05-14 22:58:54.898796] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.593 [2024-05-14 22:58:54.898835] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.593 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.593 [2024-05-14 22:58:54.915900] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.593 [2024-05-14 22:58:54.915958] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.593 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.593 [2024-05-14 22:58:54.927476] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.593 [2024-05-14 22:58:54.927516] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.593 00:11:42.593 Latency(us) 00:11:42.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.593 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:42.593 Nvme1n1 : 5.01 11497.75 89.83 0.00 0.00 11118.94 4885.41 21924.77 00:11:42.593 =================================================================================================================== 00:11:42.593 Total : 11497.75 89.83 0.00 0.00 11118.94 4885.41 21924.77 00:11:42.593 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.593 [2024-05-14 22:58:54.939457] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.593 [2024-05-14 22:58:54.939510] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.593 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.593 [2024-05-14 22:58:54.951484] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.593 [2024-05-14 22:58:54.951541] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.593 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.593 [2024-05-14 22:58:54.963477] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.593 [2024-05-14 22:58:54.963525] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.593 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.593 [2024-05-14 22:58:54.975487] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
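The Latency(us) summary embedded a few entries above reports the 5.01 s Nvme1n1 randrw job at 11497.75 IOPS with 8192-byte I/Os at queue depth 128; the MiB/s column follows directly from those two numbers (plain arithmetic, nothing assumed beyond what the table prints):

  11497.75 IO/s x 8192 bytes = 94,189,568 bytes/s
  94,189,568 / 1,048,576     = 89.83 MiB/s   (matches the reported 89.83)
  # quick check: python3 -c 'print(11497.75 * 8192 / 2**20)'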
00:11:42.593 [2024-05-14 22:58:54.975534] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.593 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 [2024-05-14 22:58:54.987497] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.852 [2024-05-14 22:58:54.987546] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.852 2024/05/14 22:58:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 [2024-05-14 22:58:54.999501] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.852 [2024-05-14 22:58:54.999553] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.852 2024/05/14 22:58:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 [2024-05-14 22:58:55.011489] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.852 [2024-05-14 22:58:55.011534] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.852 2024/05/14 22:58:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 [2024-05-14 22:58:55.023496] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.852 [2024-05-14 22:58:55.023544] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.852 2024/05/14 22:58:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 [2024-05-14 22:58:55.035497] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.852 [2024-05-14 22:58:55.035541] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.852 2024/05/14 22:58:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 [2024-05-14 22:58:55.047502] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.852 [2024-05-14 22:58:55.047548] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.852 2024/05/14 22:58:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 [2024-05-14 22:58:55.059500] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.852 [2024-05-14 22:58:55.059545] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.852 2024/05/14 22:58:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 [2024-05-14 22:58:55.071478] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.852 [2024-05-14 22:58:55.071510] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.852 2024/05/14 22:58:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 [2024-05-14 22:58:55.083515] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.852 [2024-05-14 22:58:55.083559] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.852 2024/05/14 22:58:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 [2024-05-14 22:58:55.095506] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.852 [2024-05-14 22:58:55.095545] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.852 2024/05/14 22:58:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 [2024-05-14 22:58:55.107492] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.852 [2024-05-14 22:58:55.107526] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.852 2024/05/14 22:58:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.852 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75338) - No such process 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 75338 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 
1000000 -t 1000000 -w 1000000 -n 1000000 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.852 delay0 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.852 22:58:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:43.110 [2024-05-14 22:58:55.311215] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:49.676 Initializing NVMe Controllers 00:11:49.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:49.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:49.676 Initialization complete. Launching workers. 00:11:49.676 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 100 00:11:49.676 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 387, failed to submit 33 00:11:49.676 success 204, unsuccess 183, failed 0 00:11:49.676 22:59:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:49.676 22:59:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:49.676 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.676 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:49.676 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.676 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:49.676 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.676 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.676 rmmod nvme_tcp 00:11:49.676 rmmod nvme_fabrics 00:11:49.676 rmmod nvme_keyring 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 75163 ']' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 75163 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 75163 ']' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 75163 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75163 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:49.677 killing process with pid 75163 
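The sequence above removes NSID 1, wraps malloc0 in a delay bdev (delay0, 1,000,000 us of added latency on each of the average/p99 read/write knobs), re-exposes it as NSID 1, and then runs the abort example against the slowed-down namespace so that queued I/O is still outstanding when abort commands arrive. Condensed and stripped of the xtrace markers, the same steps look roughly like this (a sketch using the rpc.py subcommands and example binary shown in the trace; paths assume the repo layout from this log):

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

With roughly a second of injected latency per I/O, most submitted commands are still queued when the aborts are issued, which is why the summary reports a mix of successful and unsuccessful aborts rather than all of one kind.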
00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75163' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 75163 00:11:49.677 [2024-05-14 22:59:01.480656] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 75163 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:49.677 00:11:49.677 real 0m24.443s 00:11:49.677 user 0m40.071s 00:11:49.677 sys 0m6.234s 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:49.677 ************************************ 00:11:49.677 END TEST nvmf_zcopy 00:11:49.677 ************************************ 00:11:49.677 22:59:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.677 22:59:01 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:49.677 22:59:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:49.677 22:59:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:49.677 22:59:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:49.677 ************************************ 00:11:49.677 START TEST nvmf_nmic 00:11:49.677 ************************************ 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:49.677 * Looking for test storage... 
00:11:49.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
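nvmf_veth_init, whose trace follows, builds the virtual test network used when NET_TYPE=virt: a nvmf_tgt_ns_spdk network namespace for the target, three veth pairs, a nvmf_br bridge joining the host-side ends, 10.0.0.1/24 on the initiator side, 10.0.0.2 and 10.0.0.3 inside the namespace, and an iptables ACCEPT rule for TCP port 4420, finishing with connectivity pings. Stripped of the xtrace noise, the topology can be reproduced standalone roughly like this (a sketch of the same commands the trace prints; interface names and addresses are taken from the log):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3               # initiator -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator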
00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:49.677 Cannot find device "nvmf_tgt_br" 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:49.677 Cannot find device "nvmf_tgt_br2" 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:49.677 Cannot find device "nvmf_tgt_br" 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:11:49.677 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:49.677 Cannot find device "nvmf_tgt_br2" 00:11:49.678 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:11:49.678 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:49.678 22:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:49.678 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:49.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.678 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:49.678 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:49.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.678 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:49.678 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:49.678 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:49.678 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:49.678 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:11:49.678 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:49.678 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:49.936 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:49.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:11:49.936 00:11:49.936 --- 10.0.0.2 ping statistics --- 00:11:49.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.937 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:49.937 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:49.937 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:11:49.937 00:11:49.937 --- 10.0.0.3 ping statistics --- 00:11:49.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.937 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:49.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:49.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:49.937 00:11:49.937 --- 10.0.0.1 ping statistics --- 00:11:49.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.937 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=75653 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 75653 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 75653 ']' 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:49.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:49.937 22:59:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.937 [2024-05-14 22:59:02.256393] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:11:49.937 [2024-05-14 22:59:02.256487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.195 [2024-05-14 22:59:02.391964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.195 [2024-05-14 22:59:02.462922] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.195 [2024-05-14 22:59:02.462998] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:50.195 [2024-05-14 22:59:02.463011] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.195 [2024-05-14 22:59:02.463021] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.195 [2024-05-14 22:59:02.463030] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.195 [2024-05-14 22:59:02.463152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.195 [2024-05-14 22:59:02.463862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.195 [2024-05-14 22:59:02.463932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.195 [2024-05-14 22:59:02.463939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.130 [2024-05-14 22:59:03.307707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.130 Malloc0 00:11:51.130 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.131 [2024-05-14 22:59:03.378162] nvmf_rpc.c: 
610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:51.131 [2024-05-14 22:59:03.378445] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.131 test case1: single bdev can't be used in multiple subsystems 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.131 [2024-05-14 22:59:03.402287] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:51.131 [2024-05-14 22:59:03.402351] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:51.131 [2024-05-14 22:59:03.402374] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.131 2024/05/14 22:59:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:51.131 request: 00:11:51.131 { 00:11:51.131 "method": "nvmf_subsystem_add_ns", 00:11:51.131 "params": { 00:11:51.131 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:51.131 "namespace": { 00:11:51.131 "bdev_name": "Malloc0", 00:11:51.131 "no_auto_visible": false 00:11:51.131 } 00:11:51.131 } 00:11:51.131 } 00:11:51.131 Got JSON-RPC error response 00:11:51.131 GoRPCClient: error on JSON-RPC call 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:51.131 Adding namespace failed - expected result. 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
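Test case 1 above exercises the exclusive_write claim the NVMe-oF target takes on a bdev: once Malloc0 is a namespace of cnode1, adding it to cnode2 is rejected with Code=-32602. A minimal sketch of the same sequence against a running nvmf_tgt, using scripts/rpc.py directly instead of the rpc_cmd wrapper, with the sizes, NQNs, and serials used in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # claims Malloc0 (exclusive_write)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    # a second claim on the same bdev must fail (Invalid parameters, as in the JSON-RPC error above)
    if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "unexpected: bdev was shared across subsystems"
    fi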
00:11:51.131 test case2: host connect to nvmf target in multiple paths 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:51.131 [2024-05-14 22:59:03.414405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.131 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.389 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:51.389 22:59:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.389 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:11:51.389 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.389 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:51.389 22:59:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:11:53.918 22:59:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:53.918 22:59:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:53.918 22:59:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.919 22:59:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:53.919 22:59:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.919 22:59:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:11:53.919 22:59:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:53.919 [global] 00:11:53.919 thread=1 00:11:53.919 invalidate=1 00:11:53.919 rw=write 00:11:53.919 time_based=1 00:11:53.919 runtime=1 00:11:53.919 ioengine=libaio 00:11:53.919 direct=1 00:11:53.919 bs=4096 00:11:53.919 iodepth=1 00:11:53.919 norandommap=0 00:11:53.919 numjobs=1 00:11:53.919 00:11:53.919 verify_dump=1 00:11:53.919 verify_backlog=512 00:11:53.919 verify_state_save=0 00:11:53.919 do_verify=1 00:11:53.919 verify=crc32c-intel 00:11:53.919 [job0] 00:11:53.919 filename=/dev/nvme0n1 00:11:53.919 Could not set queue depth (nvme0n1) 00:11:53.919 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:53.919 fio-3.35 00:11:53.919 Starting 1 thread 00:11:54.854 00:11:54.854 job0: (groupid=0, jobs=1): err= 0: pid=75767: Tue May 14 22:59:07 2024 00:11:54.854 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:54.854 slat (nsec): min=13228, max=45913, avg=16464.33, stdev=3521.14 00:11:54.854 clat (usec): 
min=136, max=226, avg=153.00, stdev= 9.66 00:11:54.854 lat (usec): min=149, max=241, avg=169.47, stdev=10.89 00:11:54.854 clat percentiles (usec): 00:11:54.854 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 143], 20.00th=[ 145], 00:11:54.854 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 153], 00:11:54.854 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 172], 00:11:54.854 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 206], 99.95th=[ 212], 00:11:54.854 | 99.99th=[ 227] 00:11:54.854 write: IOPS=3553, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1001msec); 0 zone resets 00:11:54.854 slat (nsec): min=20087, max=95323, avg=23574.58, stdev=4739.61 00:11:54.854 clat (usec): min=92, max=217, avg=107.76, stdev= 8.13 00:11:54.854 lat (usec): min=113, max=281, avg=131.34, stdev=10.33 00:11:54.854 clat percentiles (usec): 00:11:54.854 | 1.00th=[ 97], 5.00th=[ 99], 10.00th=[ 100], 20.00th=[ 102], 00:11:54.854 | 30.00th=[ 103], 40.00th=[ 105], 50.00th=[ 106], 60.00th=[ 109], 00:11:54.854 | 70.00th=[ 111], 80.00th=[ 113], 90.00th=[ 119], 95.00th=[ 124], 00:11:54.854 | 99.00th=[ 135], 99.50th=[ 141], 99.90th=[ 163], 99.95th=[ 188], 00:11:54.854 | 99.99th=[ 219] 00:11:54.854 bw ( KiB/s): min=14184, max=14184, per=99.79%, avg=14184.00, stdev= 0.00, samples=1 00:11:54.854 iops : min= 3546, max= 3546, avg=3546.00, stdev= 0.00, samples=1 00:11:54.854 lat (usec) : 100=6.20%, 250=93.80% 00:11:54.854 cpu : usr=2.40%, sys=10.20%, ctx=6631, majf=0, minf=2 00:11:54.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.854 issued rwts: total=3072,3557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.854 00:11:54.854 Run status group 0 (all jobs): 00:11:54.854 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:54.854 WRITE: bw=13.9MiB/s (14.6MB/s), 13.9MiB/s-13.9MiB/s (14.6MB/s-14.6MB/s), io=13.9MiB (14.6MB), run=1001-1001msec 00:11:54.854 00:11:54.854 Disk stats (read/write): 00:11:54.854 nvme0n1: ios=2898/3072, merge=0/0, ticks=474/361, in_queue=835, util=91.68% 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 
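Test case 2 is driven from the host side: the initiator connects to the same subsystem over both listeners (4420 and 4421), runs a short verified fio write against the resulting namespace, then drops both controllers with a single disconnect. A rough equivalent of what nmic.sh and the fio-wrapper script do here, assuming the namespace enumerates as /dev/nvme0n1; the host NQN/ID are the ones generated for this run and any unique pair works:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de
    hostid=58e20ac9-ba72-448e-a374-94608cfdd9de
    nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    # wait for the namespace to appear, matched by the subsystem serial number
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
    # 1-second libaio write with crc32c verification, the same knobs fio-wrapper passes
    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel
    # tears down both paths at once ("disconnected 2 controller(s)" above)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1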
00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.854 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.854 rmmod nvme_tcp 00:11:54.855 rmmod nvme_fabrics 00:11:54.855 rmmod nvme_keyring 00:11:54.855 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.855 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:54.855 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:54.855 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 75653 ']' 00:11:54.855 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 75653 00:11:54.855 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 75653 ']' 00:11:54.855 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 75653 00:11:54.855 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:11:54.855 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:54.855 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75653 00:11:55.112 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:55.112 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:55.113 killing process with pid 75653 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75653' 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 75653 00:11:55.113 [2024-05-14 22:59:07.255302] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 75653 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:55.113 00:11:55.113 real 0m5.717s 00:11:55.113 user 0m19.412s 00:11:55.113 sys 0m1.318s 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:55.113 22:59:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:55.113 ************************************ 00:11:55.113 END TEST nvmf_nmic 00:11:55.113 ************************************ 00:11:55.371 22:59:07 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:55.371 
22:59:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:55.371 22:59:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:55.371 22:59:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:55.371 ************************************ 00:11:55.371 START TEST nvmf_fio_target 00:11:55.371 ************************************ 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:55.371 * Looking for test storage... 00:11:55.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:55.371 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:55.372 Cannot find device "nvmf_tgt_br" 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:55.372 Cannot find device "nvmf_tgt_br2" 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:11:55.372 Cannot find device "nvmf_tgt_br" 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:55.372 Cannot find device "nvmf_tgt_br2" 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:55.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:55.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:55.372 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:55.630 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:55.630 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:55.630 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:55.630 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:55.630 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:55.630 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:55.630 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:55.630 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:55.630 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:55.630 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:55.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:11:55.631 00:11:55.631 --- 10.0.0.2 ping statistics --- 00:11:55.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.631 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:55.631 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:55.631 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:11:55.631 00:11:55.631 --- 10.0.0.3 ping statistics --- 00:11:55.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.631 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:11:55.631 00:11:55.631 --- 10.0.0.1 ping statistics --- 00:11:55.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.631 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=75941 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 75941 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 75941 ']' 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.631 22:59:07 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:55.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:55.631 22:59:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.631 [2024-05-14 22:59:08.013990] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:11:55.631 [2024-05-14 22:59:08.014080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.889 [2024-05-14 22:59:08.177047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.889 [2024-05-14 22:59:08.260900] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.889 [2024-05-14 22:59:08.260953] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.889 [2024-05-14 22:59:08.260965] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.889 [2024-05-14 22:59:08.260974] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.889 [2024-05-14 22:59:08.260982] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.889 [2024-05-14 22:59:08.261148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.889 [2024-05-14 22:59:08.261226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.889 [2024-05-14 22:59:08.261359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.889 [2024-05-14 22:59:08.261362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.825 22:59:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:56.825 22:59:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:11:56.825 22:59:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:56.825 22:59:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.825 22:59:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.825 22:59:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.825 22:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:57.082 [2024-05-14 22:59:09.259063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.082 22:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.341 22:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:57.341 22:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.599 22:59:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:57.599 22:59:09 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.856 22:59:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:57.856 22:59:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.119 22:59:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:58.119 22:59:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:58.378 22:59:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.945 22:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:58.945 22:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.945 22:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:58.945 22:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.512 22:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:59.512 22:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:59.769 22:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.027 22:59:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:00.027 22:59:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:00.286 22:59:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:00.286 22:59:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.545 22:59:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.803 [2024-05-14 22:59:13.076565] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:00.803 [2024-05-14 22:59:13.076978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.803 22:59:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:01.062 22:59:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:01.319 22:59:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.577 22:59:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:12:01.577 22:59:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:12:01.577 22:59:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.577 22:59:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:12:01.577 22:59:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:12:01.577 22:59:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:12:04.107 22:59:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:04.107 22:59:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.107 22:59:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:04.107 22:59:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:12:04.107 22:59:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.107 22:59:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:12:04.107 22:59:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:04.107 [global] 00:12:04.107 thread=1 00:12:04.107 invalidate=1 00:12:04.107 rw=write 00:12:04.107 time_based=1 00:12:04.107 runtime=1 00:12:04.107 ioengine=libaio 00:12:04.107 direct=1 00:12:04.107 bs=4096 00:12:04.107 iodepth=1 00:12:04.107 norandommap=0 00:12:04.107 numjobs=1 00:12:04.107 00:12:04.107 verify_dump=1 00:12:04.107 verify_backlog=512 00:12:04.107 verify_state_save=0 00:12:04.107 do_verify=1 00:12:04.107 verify=crc32c-intel 00:12:04.107 [job0] 00:12:04.107 filename=/dev/nvme0n1 00:12:04.107 [job1] 00:12:04.107 filename=/dev/nvme0n2 00:12:04.107 [job2] 00:12:04.107 filename=/dev/nvme0n3 00:12:04.107 [job3] 00:12:04.107 filename=/dev/nvme0n4 00:12:04.107 Could not set queue depth (nvme0n1) 00:12:04.107 Could not set queue depth (nvme0n2) 00:12:04.107 Could not set queue depth (nvme0n3) 00:12:04.107 Could not set queue depth (nvme0n4) 00:12:04.107 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.107 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.107 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.107 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.107 fio-3.35 00:12:04.107 Starting 4 threads 00:12:05.043 00:12:05.043 job0: (groupid=0, jobs=1): err= 0: pid=76244: Tue May 14 22:59:17 2024 00:12:05.043 read: IOPS=2816, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec) 00:12:05.043 slat (nsec): min=13570, max=41385, avg=18113.68, stdev=2908.78 00:12:05.043 clat (usec): min=137, max=958, avg=167.97, stdev=25.82 00:12:05.043 lat (usec): min=154, max=975, avg=186.09, stdev=25.69 00:12:05.043 clat percentiles (usec): 00:12:05.043 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:12:05.043 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:12:05.043 | 70.00th=[ 169], 80.00th=[ 182], 90.00th=[ 202], 95.00th=[ 212], 00:12:05.043 | 99.00th=[ 231], 99.50th=[ 235], 99.90th=[ 265], 99.95th=[ 379], 00:12:05.043 | 99.99th=[ 963] 00:12:05.043 write: IOPS=3068, 
BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:05.043 slat (usec): min=20, max=114, avg=26.72, stdev= 4.95 00:12:05.043 clat (usec): min=99, max=449, avg=124.00, stdev=14.87 00:12:05.043 lat (usec): min=122, max=471, avg=150.73, stdev=15.61 00:12:05.043 clat percentiles (usec): 00:12:05.043 | 1.00th=[ 104], 5.00th=[ 109], 10.00th=[ 111], 20.00th=[ 114], 00:12:05.043 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 125], 00:12:05.043 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 151], 00:12:05.043 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 233], 99.95th=[ 273], 00:12:05.043 | 99.99th=[ 449] 00:12:05.043 bw ( KiB/s): min=12288, max=12288, per=40.68%, avg=12288.00, stdev= 0.00, samples=1 00:12:05.043 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:05.043 lat (usec) : 100=0.03%, 250=99.83%, 500=0.12%, 1000=0.02% 00:12:05.043 cpu : usr=2.70%, sys=9.90%, ctx=5893, majf=0, minf=8 00:12:05.043 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.044 issued rwts: total=2819,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.044 job1: (groupid=0, jobs=1): err= 0: pid=76245: Tue May 14 22:59:17 2024 00:12:05.044 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:05.044 slat (usec): min=11, max=814, avg=22.67, stdev=25.91 00:12:05.044 clat (usec): min=203, max=714, avg=462.59, stdev=74.09 00:12:05.044 lat (usec): min=296, max=1018, avg=485.26, stdev=76.25 00:12:05.044 clat percentiles (usec): 00:12:05.044 | 1.00th=[ 297], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 400], 00:12:05.044 | 30.00th=[ 408], 40.00th=[ 420], 50.00th=[ 445], 60.00th=[ 490], 00:12:05.044 | 70.00th=[ 510], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 586], 00:12:05.044 | 99.00th=[ 619], 99.50th=[ 660], 99.90th=[ 685], 99.95th=[ 717], 00:12:05.044 | 99.99th=[ 717] 00:12:05.044 write: IOPS=1473, BW=5894KiB/s (6036kB/s)(5900KiB/1001msec); 0 zone resets 00:12:05.044 slat (usec): min=19, max=118, avg=37.11, stdev=10.94 00:12:05.044 clat (usec): min=149, max=527, avg=299.27, stdev=35.84 00:12:05.044 lat (usec): min=191, max=556, avg=336.38, stdev=33.96 00:12:05.044 clat percentiles (usec): 00:12:05.044 | 1.00th=[ 188], 5.00th=[ 237], 10.00th=[ 258], 20.00th=[ 273], 00:12:05.044 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:12:05.044 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 351], 00:12:05.044 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 519], 99.95th=[ 529], 00:12:05.044 | 99.99th=[ 529] 00:12:05.044 bw ( KiB/s): min= 6384, max= 6384, per=21.14%, avg=6384.00, stdev= 0.00, samples=1 00:12:05.044 iops : min= 1596, max= 1596, avg=1596.00, stdev= 0.00, samples=1 00:12:05.044 lat (usec) : 250=5.08%, 500=80.51%, 750=14.41% 00:12:05.044 cpu : usr=1.90%, sys=5.70%, ctx=2505, majf=0, minf=9 00:12:05.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.044 issued rwts: total=1024,1475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.044 job2: (groupid=0, jobs=1): err= 0: pid=76246: Tue May 14 22:59:17 
2024 00:12:05.044 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:05.044 slat (nsec): min=12258, max=67920, avg=22512.87, stdev=7201.29 00:12:05.044 clat (usec): min=179, max=1092, avg=462.44, stdev=73.71 00:12:05.044 lat (usec): min=202, max=1115, avg=484.95, stdev=74.48 00:12:05.044 clat percentiles (usec): 00:12:05.044 | 1.00th=[ 310], 5.00th=[ 371], 10.00th=[ 383], 20.00th=[ 400], 00:12:05.044 | 30.00th=[ 412], 40.00th=[ 420], 50.00th=[ 449], 60.00th=[ 486], 00:12:05.044 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 578], 00:12:05.044 | 99.00th=[ 627], 99.50th=[ 635], 99.90th=[ 701], 99.95th=[ 1090], 00:12:05.044 | 99.99th=[ 1090] 00:12:05.044 write: IOPS=1474, BW=5898KiB/s (6040kB/s)(5904KiB/1001msec); 0 zone resets 00:12:05.044 slat (usec): min=20, max=112, avg=35.65, stdev=10.24 00:12:05.044 clat (usec): min=151, max=527, avg=300.70, stdev=35.74 00:12:05.044 lat (usec): min=218, max=559, avg=336.35, stdev=33.28 00:12:05.044 clat percentiles (usec): 00:12:05.044 | 1.00th=[ 202], 5.00th=[ 233], 10.00th=[ 260], 20.00th=[ 277], 00:12:05.044 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 310], 00:12:05.044 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 355], 00:12:05.044 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 498], 99.95th=[ 529], 00:12:05.044 | 99.99th=[ 529] 00:12:05.044 bw ( KiB/s): min= 6384, max= 6384, per=21.14%, avg=6384.00, stdev= 0.00, samples=1 00:12:05.044 iops : min= 1596, max= 1596, avg=1596.00, stdev= 0.00, samples=1 00:12:05.044 lat (usec) : 250=4.56%, 500=81.52%, 750=13.88% 00:12:05.044 lat (msec) : 2=0.04% 00:12:05.044 cpu : usr=1.10%, sys=6.40%, ctx=2500, majf=0, minf=11 00:12:05.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.044 issued rwts: total=1024,1476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.044 job3: (groupid=0, jobs=1): err= 0: pid=76247: Tue May 14 22:59:17 2024 00:12:05.044 read: IOPS=1316, BW=5267KiB/s (5393kB/s)(5272KiB/1001msec) 00:12:05.044 slat (usec): min=17, max=113, avg=31.64, stdev= 8.27 00:12:05.044 clat (usec): min=163, max=2951, avg=355.78, stdev=101.18 00:12:05.044 lat (usec): min=186, max=3001, avg=387.42, stdev=104.13 00:12:05.044 clat percentiles (usec): 00:12:05.044 | 1.00th=[ 188], 5.00th=[ 249], 10.00th=[ 262], 20.00th=[ 277], 00:12:05.044 | 30.00th=[ 314], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 379], 00:12:05.044 | 70.00th=[ 392], 80.00th=[ 408], 90.00th=[ 449], 95.00th=[ 469], 00:12:05.044 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 750], 99.95th=[ 2966], 00:12:05.044 | 99.99th=[ 2966] 00:12:05.044 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:05.044 slat (usec): min=25, max=120, avg=44.65, stdev=12.49 00:12:05.044 clat (usec): min=118, max=2475, avg=267.42, stdev=71.48 00:12:05.044 lat (usec): min=168, max=2527, avg=312.07, stdev=72.84 00:12:05.044 clat percentiles (usec): 00:12:05.044 | 1.00th=[ 149], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 225], 00:12:05.044 | 30.00th=[ 241], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 285], 00:12:05.044 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 326], 00:12:05.044 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 668], 99.95th=[ 2474], 00:12:05.044 | 99.99th=[ 2474] 00:12:05.044 bw ( KiB/s): min= 7024, max= 7024, 
per=23.25%, avg=7024.00, stdev= 0.00, samples=1 00:12:05.044 iops : min= 1756, max= 1756, avg=1756.00, stdev= 0.00, samples=1 00:12:05.044 lat (usec) : 250=20.85%, 500=78.66%, 750=0.39%, 1000=0.04% 00:12:05.044 lat (msec) : 4=0.07% 00:12:05.044 cpu : usr=2.40%, sys=8.10%, ctx=2858, majf=0, minf=7 00:12:05.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:05.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:05.044 issued rwts: total=1318,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:05.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:05.044 00:12:05.044 Run status group 0 (all jobs): 00:12:05.044 READ: bw=24.1MiB/s (25.3MB/s), 4092KiB/s-11.0MiB/s (4190kB/s-11.5MB/s), io=24.2MiB (25.3MB), run=1001-1001msec 00:12:05.044 WRITE: bw=29.5MiB/s (30.9MB/s), 5894KiB/s-12.0MiB/s (6036kB/s-12.6MB/s), io=29.5MiB (31.0MB), run=1001-1001msec 00:12:05.044 00:12:05.044 Disk stats (read/write): 00:12:05.044 nvme0n1: ios=2559/2560, merge=0/0, ticks=461/345, in_queue=806, util=88.98% 00:12:05.044 nvme0n2: ios=1073/1093, merge=0/0, ticks=500/346, in_queue=846, util=89.39% 00:12:05.044 nvme0n3: ios=1060/1094, merge=0/0, ticks=498/341, in_queue=839, util=90.75% 00:12:05.044 nvme0n4: ios=1051/1535, merge=0/0, ticks=390/432, in_queue=822, util=90.60% 00:12:05.044 22:59:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:05.044 [global] 00:12:05.044 thread=1 00:12:05.044 invalidate=1 00:12:05.044 rw=randwrite 00:12:05.044 time_based=1 00:12:05.044 runtime=1 00:12:05.044 ioengine=libaio 00:12:05.044 direct=1 00:12:05.044 bs=4096 00:12:05.044 iodepth=1 00:12:05.044 norandommap=0 00:12:05.044 numjobs=1 00:12:05.044 00:12:05.044 verify_dump=1 00:12:05.044 verify_backlog=512 00:12:05.044 verify_state_save=0 00:12:05.044 do_verify=1 00:12:05.044 verify=crc32c-intel 00:12:05.044 [job0] 00:12:05.044 filename=/dev/nvme0n1 00:12:05.044 [job1] 00:12:05.044 filename=/dev/nvme0n2 00:12:05.044 [job2] 00:12:05.044 filename=/dev/nvme0n3 00:12:05.044 [job3] 00:12:05.044 filename=/dev/nvme0n4 00:12:05.044 Could not set queue depth (nvme0n1) 00:12:05.044 Could not set queue depth (nvme0n2) 00:12:05.044 Could not set queue depth (nvme0n3) 00:12:05.044 Could not set queue depth (nvme0n4) 00:12:05.044 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.044 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.044 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.044 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.044 fio-3.35 00:12:05.044 Starting 4 threads 00:12:06.418 00:12:06.418 job0: (groupid=0, jobs=1): err= 0: pid=76301: Tue May 14 22:59:18 2024 00:12:06.418 read: IOPS=1525, BW=6102KiB/s (6248kB/s)(6108KiB/1001msec) 00:12:06.418 slat (usec): min=14, max=700, avg=28.30, stdev=18.86 00:12:06.418 clat (usec): min=183, max=2602, avg=341.92, stdev=78.58 00:12:06.418 lat (usec): min=212, max=2624, avg=370.22, stdev=79.76 00:12:06.418 clat percentiles (usec): 00:12:06.418 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 297], 00:12:06.418 | 30.00th=[ 310], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 
00:12:06.418 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 433], 00:12:06.418 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 791], 99.95th=[ 2606], 00:12:06.418 | 99.99th=[ 2606] 00:12:06.418 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:06.418 slat (usec): min=23, max=166, avg=39.12, stdev=14.83 00:12:06.418 clat (usec): min=80, max=448, avg=237.66, stdev=38.34 00:12:06.418 lat (usec): min=138, max=501, avg=276.78, stdev=40.42 00:12:06.418 clat percentiles (usec): 00:12:06.418 | 1.00th=[ 135], 5.00th=[ 184], 10.00th=[ 198], 20.00th=[ 206], 00:12:06.418 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 237], 60.00th=[ 251], 00:12:06.418 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 297], 00:12:06.418 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 449], 00:12:06.418 | 99.99th=[ 449] 00:12:06.418 bw ( KiB/s): min= 8192, max= 8192, per=28.25%, avg=8192.00, stdev= 0.00, samples=1 00:12:06.418 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:06.418 lat (usec) : 100=0.03%, 250=30.23%, 500=68.85%, 750=0.82%, 1000=0.03% 00:12:06.418 lat (msec) : 4=0.03% 00:12:06.418 cpu : usr=2.40%, sys=7.30%, ctx=3087, majf=0, minf=9 00:12:06.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.418 issued rwts: total=1527,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.418 job1: (groupid=0, jobs=1): err= 0: pid=76302: Tue May 14 22:59:18 2024 00:12:06.418 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:06.418 slat (usec): min=9, max=100, avg=18.34, stdev= 7.48 00:12:06.418 clat (usec): min=203, max=721, avg=337.39, stdev=66.11 00:12:06.418 lat (usec): min=217, max=745, avg=355.73, stdev=69.41 00:12:06.418 clat percentiles (usec): 00:12:06.418 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 269], 00:12:06.418 | 30.00th=[ 281], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:12:06.418 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 396], 95.00th=[ 457], 00:12:06.418 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 676], 99.95th=[ 725], 00:12:06.418 | 99.99th=[ 725] 00:12:06.418 write: IOPS=1604, BW=6418KiB/s (6572kB/s)(6424KiB/1001msec); 0 zone resets 00:12:06.418 slat (usec): min=14, max=1155, avg=28.28, stdev=29.79 00:12:06.418 clat (usec): min=37, max=3134, avg=249.77, stdev=90.15 00:12:06.418 lat (usec): min=144, max=3161, avg=278.05, stdev=93.46 00:12:06.418 clat percentiles (usec): 00:12:06.418 | 1.00th=[ 159], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 212], 00:12:06.418 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 243], 60.00th=[ 262], 00:12:06.418 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 314], 00:12:06.418 | 99.00th=[ 375], 99.50th=[ 437], 99.90th=[ 1221], 99.95th=[ 3130], 00:12:06.418 | 99.99th=[ 3130] 00:12:06.418 bw ( KiB/s): min= 8192, max= 8192, per=28.25%, avg=8192.00, stdev= 0.00, samples=1 00:12:06.418 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:06.418 lat (usec) : 50=0.03%, 250=29.31%, 500=68.62%, 750=1.91%, 1000=0.06% 00:12:06.418 lat (msec) : 2=0.03%, 4=0.03% 00:12:06.418 cpu : usr=1.40%, sys=6.10%, ctx=3151, majf=0, minf=10 00:12:06.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.418 issued rwts: total=1536,1606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.418 job2: (groupid=0, jobs=1): err= 0: pid=76303: Tue May 14 22:59:18 2024 00:12:06.418 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:06.418 slat (nsec): min=11867, max=77275, avg=24300.12, stdev=7743.07 00:12:06.418 clat (usec): min=175, max=599, avg=338.29, stdev=51.35 00:12:06.418 lat (usec): min=193, max=617, avg=362.59, stdev=48.95 00:12:06.418 clat percentiles (usec): 00:12:06.418 | 1.00th=[ 227], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:12:06.418 | 30.00th=[ 306], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 355], 00:12:06.418 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 429], 00:12:06.418 | 99.00th=[ 478], 99.50th=[ 502], 99.90th=[ 545], 99.95th=[ 603], 00:12:06.418 | 99.99th=[ 603] 00:12:06.418 write: IOPS=1552, BW=6210KiB/s (6359kB/s)(6216KiB/1001msec); 0 zone resets 00:12:06.418 slat (usec): min=17, max=223, avg=36.28, stdev=11.58 00:12:06.418 clat (usec): min=113, max=1285, avg=243.20, stdev=55.77 00:12:06.418 lat (usec): min=142, max=1317, avg=279.48, stdev=53.68 00:12:06.418 clat percentiles (usec): 00:12:06.418 | 1.00th=[ 129], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 200], 00:12:06.418 | 30.00th=[ 206], 40.00th=[ 225], 50.00th=[ 247], 60.00th=[ 262], 00:12:06.418 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 314], 00:12:06.418 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 832], 99.95th=[ 1287], 00:12:06.418 | 99.99th=[ 1287] 00:12:06.418 bw ( KiB/s): min= 8192, max= 8192, per=28.25%, avg=8192.00, stdev= 0.00, samples=1 00:12:06.418 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:06.418 lat (usec) : 250=26.80%, 500=72.88%, 750=0.26%, 1000=0.03% 00:12:06.418 lat (msec) : 2=0.03% 00:12:06.418 cpu : usr=1.60%, sys=7.80%, ctx=3092, majf=0, minf=17 00:12:06.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.418 issued rwts: total=1536,1554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.418 job3: (groupid=0, jobs=1): err= 0: pid=76304: Tue May 14 22:59:18 2024 00:12:06.418 read: IOPS=2214, BW=8859KiB/s (9072kB/s)(8868KiB/1001msec) 00:12:06.418 slat (nsec): min=13545, max=80467, avg=19211.63, stdev=5661.75 00:12:06.418 clat (usec): min=149, max=705, avg=200.03, stdev=47.32 00:12:06.418 lat (usec): min=165, max=731, avg=219.24, stdev=46.84 00:12:06.418 clat percentiles (usec): 00:12:06.418 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:12:06.418 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 184], 60.00th=[ 196], 00:12:06.418 | 70.00th=[ 210], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 277], 00:12:06.418 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[ 611], 99.95th=[ 685], 00:12:06.418 | 99.99th=[ 709] 00:12:06.418 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:06.418 slat (usec): min=12, max=144, avg=27.53, stdev= 7.35 00:12:06.418 clat (usec): min=70, max=7925, avg=169.31, stdev=282.81 00:12:06.418 lat (usec): min=132, max=7956, avg=196.85, stdev=283.15 00:12:06.418 clat percentiles (usec): 00:12:06.418 | 1.00th=[ 114], 
5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 124], 00:12:06.418 | 30.00th=[ 127], 40.00th=[ 133], 50.00th=[ 139], 60.00th=[ 147], 00:12:06.418 | 70.00th=[ 159], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 225], 00:12:06.418 | 99.00th=[ 343], 99.50th=[ 594], 99.90th=[ 7373], 99.95th=[ 7767], 00:12:06.418 | 99.99th=[ 7898] 00:12:06.418 bw ( KiB/s): min=10160, max=10160, per=35.04%, avg=10160.00, stdev= 0.00, samples=1 00:12:06.418 iops : min= 2540, max= 2540, avg=2540.00, stdev= 0.00, samples=1 00:12:06.418 lat (usec) : 100=0.02%, 250=89.70%, 500=9.82%, 750=0.25%, 1000=0.06% 00:12:06.418 lat (msec) : 2=0.02%, 4=0.06%, 10=0.06% 00:12:06.418 cpu : usr=2.00%, sys=8.60%, ctx=4782, majf=0, minf=9 00:12:06.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.419 issued rwts: total=2217,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.419 00:12:06.419 Run status group 0 (all jobs): 00:12:06.419 READ: bw=26.6MiB/s (27.9MB/s), 6102KiB/s-8859KiB/s (6248kB/s-9072kB/s), io=26.6MiB (27.9MB), run=1001-1001msec 00:12:06.419 WRITE: bw=28.3MiB/s (29.7MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=28.3MiB (29.7MB), run=1001-1001msec 00:12:06.419 00:12:06.419 Disk stats (read/write): 00:12:06.419 nvme0n1: ios=1235/1536, merge=0/0, ticks=423/389, in_queue=812, util=88.18% 00:12:06.419 nvme0n2: ios=1285/1536, merge=0/0, ticks=407/368, in_queue=775, util=88.68% 00:12:06.419 nvme0n3: ios=1223/1536, merge=0/0, ticks=421/397, in_queue=818, util=89.60% 00:12:06.419 nvme0n4: ios=2038/2048, merge=0/0, ticks=415/368, in_queue=783, util=88.38% 00:12:06.419 22:59:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:06.419 [global] 00:12:06.419 thread=1 00:12:06.419 invalidate=1 00:12:06.419 rw=write 00:12:06.419 time_based=1 00:12:06.419 runtime=1 00:12:06.419 ioengine=libaio 00:12:06.419 direct=1 00:12:06.419 bs=4096 00:12:06.419 iodepth=128 00:12:06.419 norandommap=0 00:12:06.419 numjobs=1 00:12:06.419 00:12:06.419 verify_dump=1 00:12:06.419 verify_backlog=512 00:12:06.419 verify_state_save=0 00:12:06.419 do_verify=1 00:12:06.419 verify=crc32c-intel 00:12:06.419 [job0] 00:12:06.419 filename=/dev/nvme0n1 00:12:06.419 [job1] 00:12:06.419 filename=/dev/nvme0n2 00:12:06.419 [job2] 00:12:06.419 filename=/dev/nvme0n3 00:12:06.419 [job3] 00:12:06.419 filename=/dev/nvme0n4 00:12:06.419 Could not set queue depth (nvme0n1) 00:12:06.419 Could not set queue depth (nvme0n2) 00:12:06.419 Could not set queue depth (nvme0n3) 00:12:06.419 Could not set queue depth (nvme0n4) 00:12:06.419 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.419 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.419 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.419 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.419 fio-3.35 00:12:06.419 Starting 4 threads 00:12:07.860 00:12:07.860 job0: (groupid=0, jobs=1): err= 0: pid=76358: Tue May 14 22:59:19 2024 00:12:07.860 read: IOPS=2670, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1004msec) 00:12:07.860 
slat (usec): min=3, max=6744, avg=173.17, stdev=646.74 00:12:07.860 clat (usec): min=2995, max=29856, avg=21958.43, stdev=3269.85 00:12:07.860 lat (usec): min=3188, max=29871, avg=22131.60, stdev=3253.78 00:12:07.860 clat percentiles (usec): 00:12:07.860 | 1.00th=[ 8979], 5.00th=[18220], 10.00th=[19006], 20.00th=[20055], 00:12:07.860 | 30.00th=[21103], 40.00th=[21627], 50.00th=[22152], 60.00th=[22938], 00:12:07.860 | 70.00th=[23462], 80.00th=[24249], 90.00th=[25297], 95.00th=[25822], 00:12:07.860 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29754], 99.95th=[29754], 00:12:07.860 | 99.99th=[29754] 00:12:07.860 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:12:07.860 slat (usec): min=4, max=6090, avg=166.65, stdev=687.65 00:12:07.860 clat (usec): min=14316, max=29837, avg=22013.87, stdev=2589.99 00:12:07.860 lat (usec): min=15343, max=29863, avg=22180.52, stdev=2519.09 00:12:07.860 clat percentiles (usec): 00:12:07.860 | 1.00th=[15533], 5.00th=[17695], 10.00th=[19006], 20.00th=[19530], 00:12:07.860 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21890], 60.00th=[22938], 00:12:07.860 | 70.00th=[23462], 80.00th=[24249], 90.00th=[25035], 95.00th=[26084], 00:12:07.860 | 99.00th=[28443], 99.50th=[28705], 99.90th=[29754], 99.95th=[29754], 00:12:07.860 | 99.99th=[29754] 00:12:07.860 bw ( KiB/s): min=12240, max=12263, per=20.89%, avg=12251.50, stdev=16.26, samples=2 00:12:07.860 iops : min= 3060, max= 3065, avg=3062.50, stdev= 3.54, samples=2 00:12:07.860 lat (msec) : 4=0.09%, 10=0.90%, 20=20.88%, 50=78.13% 00:12:07.860 cpu : usr=3.19%, sys=7.88%, ctx=1007, majf=0, minf=15 00:12:07.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:07.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.860 issued rwts: total=2681,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.860 job1: (groupid=0, jobs=1): err= 0: pid=76359: Tue May 14 22:59:19 2024 00:12:07.860 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:12:07.860 slat (usec): min=2, max=4307, avg=113.63, stdev=447.97 00:12:07.860 clat (usec): min=8701, max=25105, avg=15144.08, stdev=4245.57 00:12:07.860 lat (usec): min=9413, max=25132, avg=15257.71, stdev=4266.05 00:12:07.860 clat percentiles (usec): 00:12:07.860 | 1.00th=[ 9503], 5.00th=[10552], 10.00th=[11076], 20.00th=[11469], 00:12:07.860 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12780], 60.00th=[15008], 00:12:07.860 | 70.00th=[19006], 80.00th=[20317], 90.00th=[21365], 95.00th=[22152], 00:12:07.860 | 99.00th=[23462], 99.50th=[23462], 99.90th=[24249], 99.95th=[24249], 00:12:07.860 | 99.99th=[25035] 00:12:07.860 write: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1001msec); 0 zone resets 00:12:07.860 slat (usec): min=7, max=5121, avg=109.69, stdev=446.52 00:12:07.860 clat (usec): min=778, max=24739, avg=14163.19, stdev=4440.83 00:12:07.860 lat (usec): min=826, max=24757, avg=14272.88, stdev=4459.57 00:12:07.860 clat percentiles (usec): 00:12:07.860 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:12:07.860 | 30.00th=[11207], 40.00th=[11469], 50.00th=[12125], 60.00th=[12780], 00:12:07.860 | 70.00th=[17171], 80.00th=[19792], 90.00th=[21103], 95.00th=[21627], 00:12:07.860 | 99.00th=[23200], 99.50th=[23725], 99.90th=[24249], 99.95th=[24511], 00:12:07.860 | 99.99th=[24773] 00:12:07.860 bw ( KiB/s): min=21760, max=21760, 
per=37.10%, avg=21760.00, stdev= 0.00, samples=1 00:12:07.860 iops : min= 5440, max= 5440, avg=5440.00, stdev= 0.00, samples=1 00:12:07.860 lat (usec) : 1000=0.06% 00:12:07.860 lat (msec) : 2=0.06%, 10=4.91%, 20=74.37%, 50=20.60% 00:12:07.861 cpu : usr=3.90%, sys=12.70%, ctx=864, majf=0, minf=9 00:12:07.861 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:07.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.861 issued rwts: total=4096,4553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.861 job2: (groupid=0, jobs=1): err= 0: pid=76360: Tue May 14 22:59:19 2024 00:12:07.861 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:12:07.861 slat (usec): min=3, max=12665, avg=139.00, stdev=687.55 00:12:07.861 clat (usec): min=6219, max=30855, avg=17801.30, stdev=4515.99 00:12:07.861 lat (usec): min=6233, max=30869, avg=17940.30, stdev=4547.68 00:12:07.861 clat percentiles (usec): 00:12:07.861 | 1.00th=[ 9765], 5.00th=[11469], 10.00th=[11863], 20.00th=[13173], 00:12:07.861 | 30.00th=[13960], 40.00th=[16581], 50.00th=[17695], 60.00th=[19792], 00:12:07.861 | 70.00th=[20841], 80.00th=[21627], 90.00th=[23462], 95.00th=[24773], 00:12:07.861 | 99.00th=[28705], 99.50th=[30278], 99.90th=[30802], 99.95th=[30802], 00:12:07.861 | 99.99th=[30802] 00:12:07.861 write: IOPS=4017, BW=15.7MiB/s (16.5MB/s)(15.7MiB/1002msec); 0 zone resets 00:12:07.861 slat (usec): min=5, max=11612, avg=117.47, stdev=538.82 00:12:07.861 clat (usec): min=906, max=26666, avg=15682.52, stdev=4138.57 00:12:07.861 lat (usec): min=4678, max=26676, avg=15799.99, stdev=4164.53 00:12:07.861 clat percentiles (usec): 00:12:07.861 | 1.00th=[ 5932], 5.00th=[ 7963], 10.00th=[10290], 20.00th=[13173], 00:12:07.861 | 30.00th=[13960], 40.00th=[14746], 50.00th=[15008], 60.00th=[15533], 00:12:07.861 | 70.00th=[19006], 80.00th=[20579], 90.00th=[21103], 95.00th=[21627], 00:12:07.861 | 99.00th=[23462], 99.50th=[24511], 99.90th=[26084], 99.95th=[26608], 00:12:07.861 | 99.99th=[26608] 00:12:07.861 bw ( KiB/s): min=19056, max=19056, per=32.49%, avg=19056.00, stdev= 0.00, samples=1 00:12:07.861 iops : min= 4764, max= 4764, avg=4764.00, stdev= 0.00, samples=1 00:12:07.861 lat (usec) : 1000=0.01% 00:12:07.861 lat (msec) : 10=5.37%, 20=64.48%, 50=30.13% 00:12:07.861 cpu : usr=3.20%, sys=10.19%, ctx=779, majf=0, minf=9 00:12:07.861 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:07.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.861 issued rwts: total=3584,4026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.861 job3: (groupid=0, jobs=1): err= 0: pid=76361: Tue May 14 22:59:19 2024 00:12:07.861 read: IOPS=2712, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1003msec) 00:12:07.861 slat (usec): min=3, max=5551, avg=173.17, stdev=636.57 00:12:07.861 clat (usec): min=326, max=28435, avg=21443.72, stdev=3401.37 00:12:07.861 lat (usec): min=3147, max=28449, avg=21616.89, stdev=3378.18 00:12:07.861 clat percentiles (usec): 00:12:07.861 | 1.00th=[ 3654], 5.00th=[16909], 10.00th=[18482], 20.00th=[19530], 00:12:07.861 | 30.00th=[20317], 40.00th=[21103], 50.00th=[21890], 60.00th=[22414], 00:12:07.861 | 70.00th=[23200], 80.00th=[23987], 
90.00th=[24511], 95.00th=[25822], 00:12:07.861 | 99.00th=[26346], 99.50th=[26346], 99.90th=[28443], 99.95th=[28443], 00:12:07.861 | 99.99th=[28443] 00:12:07.861 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:12:07.861 slat (usec): min=4, max=6111, avg=164.92, stdev=673.68 00:12:07.861 clat (usec): min=14748, max=29371, avg=22026.51, stdev=2306.44 00:12:07.861 lat (usec): min=15472, max=29655, avg=22191.43, stdev=2231.96 00:12:07.861 clat percentiles (usec): 00:12:07.861 | 1.00th=[16581], 5.00th=[18744], 10.00th=[19268], 20.00th=[19792], 00:12:07.861 | 30.00th=[20841], 40.00th=[21103], 50.00th=[22414], 60.00th=[22938], 00:12:07.861 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25035], 95.00th=[25560], 00:12:07.861 | 99.00th=[27395], 99.50th=[28443], 99.90th=[29230], 99.95th=[29492], 00:12:07.861 | 99.99th=[29492] 00:12:07.861 bw ( KiB/s): min=12288, max=12312, per=20.97%, avg=12300.00, stdev=16.97, samples=2 00:12:07.861 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:12:07.861 lat (usec) : 500=0.02% 00:12:07.861 lat (msec) : 4=0.55%, 10=0.55%, 20=22.56%, 50=76.32% 00:12:07.861 cpu : usr=2.40%, sys=8.98%, ctx=970, majf=0, minf=17 00:12:07.861 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:07.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.861 issued rwts: total=2721,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.861 00:12:07.861 Run status group 0 (all jobs): 00:12:07.861 READ: bw=50.9MiB/s (53.4MB/s), 10.4MiB/s-16.0MiB/s (10.9MB/s-16.8MB/s), io=51.1MiB (53.6MB), run=1001-1004msec 00:12:07.861 WRITE: bw=57.3MiB/s (60.1MB/s), 12.0MiB/s-17.8MiB/s (12.5MB/s-18.6MB/s), io=57.5MiB (60.3MB), run=1001-1004msec 00:12:07.861 00:12:07.861 Disk stats (read/write): 00:12:07.861 nvme0n1: ios=2359/2560, merge=0/0, ticks=12416/12236, in_queue=24652, util=88.28% 00:12:07.861 nvme0n2: ios=3732/4096, merge=0/0, ticks=12518/12074, in_queue=24592, util=88.37% 00:12:07.861 nvme0n3: ios=3178/3584, merge=0/0, ticks=36461/37941, in_queue=74402, util=89.08% 00:12:07.861 nvme0n4: ios=2338/2560, merge=0/0, ticks=12302/12818, in_queue=25120, util=89.53% 00:12:07.861 22:59:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:07.861 [global] 00:12:07.861 thread=1 00:12:07.861 invalidate=1 00:12:07.861 rw=randwrite 00:12:07.861 time_based=1 00:12:07.861 runtime=1 00:12:07.861 ioengine=libaio 00:12:07.861 direct=1 00:12:07.861 bs=4096 00:12:07.861 iodepth=128 00:12:07.861 norandommap=0 00:12:07.861 numjobs=1 00:12:07.861 00:12:07.861 verify_dump=1 00:12:07.861 verify_backlog=512 00:12:07.861 verify_state_save=0 00:12:07.861 do_verify=1 00:12:07.861 verify=crc32c-intel 00:12:07.861 [job0] 00:12:07.861 filename=/dev/nvme0n1 00:12:07.861 [job1] 00:12:07.861 filename=/dev/nvme0n2 00:12:07.861 [job2] 00:12:07.861 filename=/dev/nvme0n3 00:12:07.861 [job3] 00:12:07.861 filename=/dev/nvme0n4 00:12:07.861 Could not set queue depth (nvme0n1) 00:12:07.861 Could not set queue depth (nvme0n2) 00:12:07.861 Could not set queue depth (nvme0n3) 00:12:07.861 Could not set queue depth (nvme0n4) 00:12:07.861 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.861 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.861 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.861 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.861 fio-3.35 00:12:07.861 Starting 4 threads 00:12:09.236 00:12:09.236 job0: (groupid=0, jobs=1): err= 0: pid=76420: Tue May 14 22:59:21 2024 00:12:09.236 read: IOPS=2203, BW=8814KiB/s (9026kB/s)(8832KiB/1002msec) 00:12:09.236 slat (usec): min=8, max=8811, avg=204.51, stdev=897.22 00:12:09.237 clat (usec): min=883, max=35928, avg=24810.54, stdev=3831.54 00:12:09.237 lat (usec): min=5099, max=36845, avg=25015.04, stdev=3805.55 00:12:09.237 clat percentiles (usec): 00:12:09.237 | 1.00th=[ 6849], 5.00th=[19530], 10.00th=[21365], 20.00th=[22676], 00:12:09.237 | 30.00th=[23725], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:12:09.237 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28967], 95.00th=[30540], 00:12:09.237 | 99.00th=[32113], 99.50th=[32375], 99.90th=[35914], 99.95th=[35914], 00:12:09.237 | 99.99th=[35914] 00:12:09.237 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:12:09.237 slat (usec): min=14, max=5997, avg=207.56, stdev=774.95 00:12:09.237 clat (usec): min=13229, max=39721, avg=27953.77, stdev=5417.80 00:12:09.237 lat (usec): min=15119, max=39744, avg=28161.33, stdev=5413.28 00:12:09.237 clat percentiles (usec): 00:12:09.237 | 1.00th=[15926], 5.00th=[19268], 10.00th=[21103], 20.00th=[23725], 00:12:09.237 | 30.00th=[24773], 40.00th=[26346], 50.00th=[27919], 60.00th=[28181], 00:12:09.237 | 70.00th=[30802], 80.00th=[32900], 90.00th=[35914], 95.00th=[38011], 00:12:09.237 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:12:09.237 | 99.99th=[39584] 00:12:09.237 bw ( KiB/s): min= 9856, max=10624, per=15.84%, avg=10240.00, stdev=543.06, samples=2 00:12:09.237 iops : min= 2464, max= 2656, avg=2560.00, stdev=135.76, samples=2 00:12:09.237 lat (usec) : 1000=0.02% 00:12:09.237 lat (msec) : 10=0.67%, 20=4.91%, 50=94.40% 00:12:09.237 cpu : usr=2.00%, sys=6.49%, ctx=356, majf=0, minf=13 00:12:09.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:12:09.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.237 issued rwts: total=2208,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.237 job1: (groupid=0, jobs=1): err= 0: pid=76421: Tue May 14 22:59:21 2024 00:12:09.237 read: IOPS=5330, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1002msec) 00:12:09.237 slat (usec): min=8, max=2735, avg=88.83, stdev=397.53 00:12:09.237 clat (usec): min=1283, max=13904, avg=11721.46, stdev=1110.17 00:12:09.237 lat (usec): min=1295, max=13920, avg=11810.29, stdev=1052.33 00:12:09.237 clat percentiles (usec): 00:12:09.237 | 1.00th=[ 5604], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11469], 00:12:09.237 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[11994], 00:12:09.237 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12518], 95.00th=[12649], 00:12:09.237 | 99.00th=[13173], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:12:09.237 | 99.99th=[13960] 00:12:09.237 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:12:09.237 slat (usec): min=12, max=2620, avg=85.05, stdev=308.62 00:12:09.237 clat 
(usec): min=8797, max=13876, avg=11357.29, stdev=1003.32 00:12:09.237 lat (usec): min=8897, max=13896, avg=11442.34, stdev=1003.03 00:12:09.237 clat percentiles (usec): 00:12:09.237 | 1.00th=[ 9503], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10290], 00:12:09.237 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11600], 60.00th=[11731], 00:12:09.237 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12649], 95.00th=[12911], 00:12:09.237 | 99.00th=[13435], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:12:09.237 | 99.99th=[13829] 00:12:09.237 bw ( KiB/s): min=22396, max=22704, per=34.88%, avg=22550.00, stdev=217.79, samples=2 00:12:09.237 iops : min= 5599, max= 5676, avg=5637.50, stdev=54.45, samples=2 00:12:09.237 lat (msec) : 2=0.07%, 4=0.15%, 10=6.36%, 20=93.41% 00:12:09.237 cpu : usr=5.00%, sys=16.48%, ctx=638, majf=0, minf=15 00:12:09.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:09.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.237 issued rwts: total=5341,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.237 job2: (groupid=0, jobs=1): err= 0: pid=76422: Tue May 14 22:59:21 2024 00:12:09.237 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:12:09.237 slat (usec): min=7, max=9431, avg=186.70, stdev=956.68 00:12:09.237 clat (usec): min=15843, max=34135, avg=24678.73, stdev=3322.30 00:12:09.237 lat (usec): min=19262, max=34188, avg=24865.43, stdev=3206.83 00:12:09.237 clat percentiles (usec): 00:12:09.237 | 1.00th=[17695], 5.00th=[20579], 10.00th=[21627], 20.00th=[22152], 00:12:09.237 | 30.00th=[22414], 40.00th=[22676], 50.00th=[23725], 60.00th=[25035], 00:12:09.237 | 70.00th=[26346], 80.00th=[27395], 90.00th=[29230], 95.00th=[30802], 00:12:09.237 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[34341], 00:12:09.237 | 99.99th=[34341] 00:12:09.237 write: IOPS=2907, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1002msec); 0 zone resets 00:12:09.237 slat (usec): min=16, max=9039, avg=170.28, stdev=808.50 00:12:09.237 clat (usec): min=623, max=30748, avg=21560.41, stdev=3648.73 00:12:09.237 lat (usec): min=5814, max=30773, avg=21730.69, stdev=3577.22 00:12:09.237 clat percentiles (usec): 00:12:09.237 | 1.00th=[ 6652], 5.00th=[17695], 10.00th=[18220], 20.00th=[18482], 00:12:09.237 | 30.00th=[19530], 40.00th=[20579], 50.00th=[21627], 60.00th=[22414], 00:12:09.237 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25297], 95.00th=[28181], 00:12:09.237 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:12:09.237 | 99.99th=[30802] 00:12:09.237 bw ( KiB/s): min=10504, max=11807, per=17.26%, avg=11155.50, stdev=921.36, samples=2 00:12:09.237 iops : min= 2626, max= 2951, avg=2788.50, stdev=229.81, samples=2 00:12:09.237 lat (usec) : 750=0.02% 00:12:09.237 lat (msec) : 10=0.58%, 20=20.23%, 50=79.17% 00:12:09.237 cpu : usr=2.50%, sys=9.49%, ctx=174, majf=0, minf=11 00:12:09.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:09.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.237 issued rwts: total=2560,2913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.237 job3: (groupid=0, jobs=1): err= 0: pid=76423: Tue May 14 22:59:21 2024 
00:12:09.237 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:12:09.237 slat (usec): min=6, max=3164, avg=100.50, stdev=455.77 00:12:09.237 clat (usec): min=10134, max=16517, avg=13377.65, stdev=772.09 00:12:09.237 lat (usec): min=10510, max=19344, avg=13478.15, stdev=670.36 00:12:09.237 clat percentiles (usec): 00:12:09.237 | 1.00th=[10683], 5.00th=[11469], 10.00th=[12387], 20.00th=[13042], 00:12:09.237 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13435], 60.00th=[13566], 00:12:09.237 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14091], 95.00th=[14353], 00:12:09.237 | 99.00th=[14877], 99.50th=[15139], 99.90th=[15533], 99.95th=[15533], 00:12:09.237 | 99.99th=[16581] 00:12:09.237 write: IOPS=5088, BW=19.9MiB/s (20.8MB/s)(19.9MiB/1003msec); 0 zone resets 00:12:09.237 slat (usec): min=11, max=3194, avg=97.32, stdev=411.47 00:12:09.237 clat (usec): min=2048, max=15476, avg=12739.10, stdev=1551.14 00:12:09.237 lat (usec): min=2676, max=15496, avg=12836.43, stdev=1547.74 00:12:09.237 clat percentiles (usec): 00:12:09.237 | 1.00th=[ 7701], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:12:09.237 | 30.00th=[11731], 40.00th=[11994], 50.00th=[13042], 60.00th=[13566], 00:12:09.237 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14484], 95.00th=[14746], 00:12:09.237 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15270], 99.95th=[15401], 00:12:09.237 | 99.99th=[15533] 00:12:09.237 bw ( KiB/s): min=19336, max=20480, per=30.80%, avg=19908.00, stdev=808.93, samples=2 00:12:09.237 iops : min= 4834, max= 5120, avg=4977.00, stdev=202.23, samples=2 00:12:09.237 lat (msec) : 4=0.18%, 10=0.63%, 20=99.20% 00:12:09.237 cpu : usr=3.19%, sys=15.37%, ctx=521, majf=0, minf=11 00:12:09.237 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:09.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.237 issued rwts: total=4608,5104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.237 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.237 00:12:09.237 Run status group 0 (all jobs): 00:12:09.237 READ: bw=57.3MiB/s (60.1MB/s), 8814KiB/s-20.8MiB/s (9026kB/s-21.8MB/s), io=57.5MiB (60.3MB), run=1002-1003msec 00:12:09.237 WRITE: bw=63.1MiB/s (66.2MB/s), 9.98MiB/s-22.0MiB/s (10.5MB/s-23.0MB/s), io=63.3MiB (66.4MB), run=1002-1003msec 00:12:09.237 00:12:09.237 Disk stats (read/write): 00:12:09.237 nvme0n1: ios=2098/2143, merge=0/0, ticks=12626/14044, in_queue=26670, util=89.67% 00:12:09.237 nvme0n2: ios=4657/4916, merge=0/0, ticks=12391/12039, in_queue=24430, util=89.80% 00:12:09.237 nvme0n3: ios=2165/2560, merge=0/0, ticks=12862/12566, in_queue=25428, util=89.74% 00:12:09.237 nvme0n4: ios=4096/4354, merge=0/0, ticks=12551/12121, in_queue=24672, util=89.79% 00:12:09.237 22:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:09.237 22:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=76436 00:12:09.237 22:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:09.237 22:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:09.237 [global] 00:12:09.237 thread=1 00:12:09.237 invalidate=1 00:12:09.237 rw=read 00:12:09.237 time_based=1 00:12:09.237 runtime=10 00:12:09.237 ioengine=libaio 00:12:09.237 direct=1 00:12:09.237 bs=4096 00:12:09.237 iodepth=1 00:12:09.237 norandommap=1 00:12:09.237 numjobs=1 00:12:09.237 00:12:09.237 [job0] 
00:12:09.237 filename=/dev/nvme0n1 00:12:09.237 [job1] 00:12:09.237 filename=/dev/nvme0n2 00:12:09.237 [job2] 00:12:09.237 filename=/dev/nvme0n3 00:12:09.237 [job3] 00:12:09.237 filename=/dev/nvme0n4 00:12:09.237 Could not set queue depth (nvme0n1) 00:12:09.237 Could not set queue depth (nvme0n2) 00:12:09.237 Could not set queue depth (nvme0n3) 00:12:09.237 Could not set queue depth (nvme0n4) 00:12:09.237 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.237 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.237 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.237 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.237 fio-3.35 00:12:09.237 Starting 4 threads 00:12:12.517 22:59:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:12.517 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=63610880, buflen=4096 00:12:12.518 fio: pid=76479, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:12.518 22:59:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:12.775 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=42115072, buflen=4096 00:12:12.775 fio: pid=76478, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:12.775 22:59:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.775 22:59:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:13.032 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3043328, buflen=4096 00:12:13.032 fio: pid=76476, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:13.032 22:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.032 22:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:13.289 fio: pid=76477, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:13.289 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=55115776, buflen=4096 00:12:13.289 00:12:13.289 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76476: Tue May 14 22:59:25 2024 00:12:13.289 read: IOPS=4842, BW=18.9MiB/s (19.8MB/s)(66.9MiB/3537msec) 00:12:13.289 slat (usec): min=12, max=13244, avg=19.86, stdev=164.74 00:12:13.289 clat (usec): min=134, max=3566, avg=184.88, stdev=70.99 00:12:13.289 lat (usec): min=148, max=13527, avg=204.74, stdev=180.98 00:12:13.289 clat percentiles (usec): 00:12:13.289 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:12:13.289 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:12:13.289 | 70.00th=[ 180], 80.00th=[ 190], 90.00th=[ 210], 95.00th=[ 253], 00:12:13.289 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 807], 99.95th=[ 1483], 00:12:13.289 | 99.99th=[ 3064] 00:12:13.289 bw ( KiB/s): min=16800, max=20992, per=33.93%, avg=19686.67, stdev=1690.29, samples=6 00:12:13.289 iops : min= 4200, max= 5248, avg=4921.67, stdev=422.57, 
samples=6 00:12:13.289 lat (usec) : 250=94.86%, 500=4.85%, 750=0.16%, 1000=0.07% 00:12:13.289 lat (msec) : 2=0.04%, 4=0.02% 00:12:13.289 cpu : usr=2.04%, sys=6.87%, ctx=17136, majf=0, minf=1 00:12:13.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:13.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.289 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.289 issued rwts: total=17128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:13.289 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76477: Tue May 14 22:59:25 2024 00:12:13.289 read: IOPS=3461, BW=13.5MiB/s (14.2MB/s)(52.6MiB/3888msec) 00:12:13.289 slat (usec): min=10, max=12814, avg=19.39, stdev=205.25 00:12:13.289 clat (usec): min=130, max=15854, avg=267.81, stdev=156.83 00:12:13.289 lat (usec): min=144, max=15871, avg=287.20, stdev=258.22 00:12:13.289 clat percentiles (usec): 00:12:13.289 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 161], 00:12:13.289 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:12:13.289 | 70.00th=[ 306], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 330], 00:12:13.289 | 99.00th=[ 379], 99.50th=[ 416], 99.90th=[ 562], 99.95th=[ 1090], 00:12:13.289 | 99.99th=[ 3326] 00:12:13.289 bw ( KiB/s): min=12240, max=16006, per=22.42%, avg=13008.86, stdev=1330.91, samples=7 00:12:13.289 iops : min= 3060, max= 4001, avg=3252.14, stdev=332.54, samples=7 00:12:13.289 lat (usec) : 250=24.90%, 500=74.93%, 750=0.07%, 1000=0.03% 00:12:13.289 lat (msec) : 2=0.01%, 4=0.03%, 20=0.01% 00:12:13.289 cpu : usr=1.13%, sys=4.76%, ctx=13483, majf=0, minf=1 00:12:13.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:13.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.289 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.289 issued rwts: total=13457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:13.289 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76478: Tue May 14 22:59:25 2024 00:12:13.289 read: IOPS=3123, BW=12.2MiB/s (12.8MB/s)(40.2MiB/3292msec) 00:12:13.289 slat (usec): min=10, max=7722, avg=17.17, stdev=104.26 00:12:13.289 clat (usec): min=50, max=2664, avg=301.19, stdev=46.94 00:12:13.289 lat (usec): min=174, max=7932, avg=318.36, stdev=113.43 00:12:13.289 clat percentiles (usec): 00:12:13.289 | 1.00th=[ 204], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:12:13.289 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:12:13.289 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 334], 00:12:13.289 | 99.00th=[ 383], 99.50th=[ 404], 99.90th=[ 519], 99.95th=[ 857], 00:12:13.289 | 99.99th=[ 2540] 00:12:13.289 bw ( KiB/s): min=12240, max=12704, per=21.54%, avg=12500.00, stdev=177.00, samples=6 00:12:13.289 iops : min= 3060, max= 3176, avg=3125.00, stdev=44.25, samples=6 00:12:13.289 lat (usec) : 100=0.01%, 250=1.86%, 500=98.00%, 750=0.06%, 1000=0.03% 00:12:13.289 lat (msec) : 2=0.01%, 4=0.03% 00:12:13.289 cpu : usr=1.19%, sys=4.16%, ctx=10293, majf=0, minf=1 00:12:13.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:13.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.289 complete : 
0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.289 issued rwts: total=10283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:13.289 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76479: Tue May 14 22:59:25 2024 00:12:13.289 read: IOPS=5134, BW=20.1MiB/s (21.0MB/s)(60.7MiB/3025msec) 00:12:13.289 slat (nsec): min=13027, max=74209, avg=16161.58, stdev=3583.64 00:12:13.289 clat (usec): min=147, max=983, avg=176.99, stdev=22.64 00:12:13.289 lat (usec): min=164, max=999, avg=193.15, stdev=23.33 00:12:13.289 clat percentiles (usec): 00:12:13.289 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:12:13.289 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:12:13.289 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 225], 00:12:13.289 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 334], 99.95th=[ 441], 00:12:13.289 | 99.99th=[ 750] 00:12:13.289 bw ( KiB/s): min=18584, max=21304, per=35.49%, avg=20592.00, stdev=1025.85, samples=6 00:12:13.289 iops : min= 4646, max= 5326, avg=5148.00, stdev=256.46, samples=6 00:12:13.289 lat (usec) : 250=98.24%, 500=1.72%, 750=0.03%, 1000=0.01% 00:12:13.289 cpu : usr=1.22%, sys=6.98%, ctx=15536, majf=0, minf=1 00:12:13.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:13.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.289 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.289 issued rwts: total=15531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:13.289 00:12:13.289 Run status group 0 (all jobs): 00:12:13.289 READ: bw=56.7MiB/s (59.4MB/s), 12.2MiB/s-20.1MiB/s (12.8MB/s-21.0MB/s), io=220MiB (231MB), run=3025-3888msec 00:12:13.289 00:12:13.289 Disk stats (read/write): 00:12:13.289 nvme0n1: ios=16425/0, merge=0/0, ticks=3086/0, in_queue=3086, util=95.14% 00:12:13.289 nvme0n2: ios=13350/0, merge=0/0, ticks=3542/0, in_queue=3542, util=95.74% 00:12:13.289 nvme0n3: ios=9719/0, merge=0/0, ticks=2925/0, in_queue=2925, util=96.40% 00:12:13.289 nvme0n4: ios=14769/0, merge=0/0, ticks=2703/0, in_queue=2703, util=96.76% 00:12:13.289 22:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.289 22:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:13.547 22:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.548 22:59:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:13.806 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.806 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:14.063 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:14.063 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:14.322 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:14.322 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:14.580 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:14.580 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 76436 00:12:14.580 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:14.580 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.580 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.580 22:59:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:12:14.580 22:59:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:14.580 22:59:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.839 22:59:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.839 22:59:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:14.839 22:59:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:12:14.839 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:14.839 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:14.839 nvmf hotplug test: fio failed as expected 00:12:14.839 22:59:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.097 rmmod nvme_tcp 00:12:15.097 rmmod nvme_fabrics 00:12:15.097 rmmod nvme_keyring 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 75941 ']' 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 75941 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 75941 ']' 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target 
-- common/autotest_common.sh@950 -- # kill -0 75941 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75941 00:12:15.097 killing process with pid 75941 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75941' 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 75941 00:12:15.097 [2024-05-14 22:59:27.328860] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:15.097 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 75941 00:12:15.356 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:15.357 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:15.357 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:15.357 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:15.357 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:15.357 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.357 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.357 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.357 22:59:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:15.357 ************************************ 00:12:15.357 END TEST nvmf_fio_target 00:12:15.357 ************************************ 00:12:15.357 00:12:15.357 real 0m20.028s 00:12:15.357 user 1m17.726s 00:12:15.357 sys 0m8.720s 00:12:15.357 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:15.357 22:59:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.357 22:59:27 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:15.357 22:59:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:15.357 22:59:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:15.357 22:59:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:15.357 ************************************ 00:12:15.357 START TEST nvmf_bdevio 00:12:15.357 ************************************ 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:15.357 * Looking for test storage... 
00:12:15.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.357 22:59:27 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:15.357 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:15.615 Cannot find device "nvmf_tgt_br" 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:15.615 Cannot find device "nvmf_tgt_br2" 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:15.615 Cannot find device "nvmf_tgt_br" 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:15.615 Cannot find device "nvmf_tgt_br2" 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:15.615 22:59:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:15.873 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:15.873 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:15.873 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:15.873 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:15.873 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:15.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:12:15.873 00:12:15.873 --- 10.0.0.2 ping statistics --- 00:12:15.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.873 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:15.873 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:15.873 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:15.873 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:12:15.873 00:12:15.873 --- 10.0.0.3 ping statistics --- 00:12:15.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.873 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:15.873 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:15.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:15.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:15.873 00:12:15.873 --- 10.0.0.1 ping statistics --- 00:12:15.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.874 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=76806 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 76806 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 76806 ']' 00:12:15.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:15.874 22:59:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:15.874 [2024-05-14 22:59:28.151806] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:12:15.874 [2024-05-14 22:59:28.151902] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.132 [2024-05-14 22:59:28.292266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.132 [2024-05-14 22:59:28.363585] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.132 [2024-05-14 22:59:28.364132] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
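For reference, the nvmf_veth_init sequence traced above boils down to the following topology sketch (same namespace, interface, and address names as nvmf/common.sh uses; the link-up steps and the verification pings are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # NVMF_SECOND_TARGET_IP
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT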
00:12:16.132 [2024-05-14 22:59:28.364736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.132 [2024-05-14 22:59:28.365193] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.132 [2024-05-14 22:59:28.365583] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.132 [2024-05-14 22:59:28.366034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:16.132 [2024-05-14 22:59:28.366088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:16.132 [2024-05-14 22:59:28.366175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:16.132 [2024-05-14 22:59:28.366185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.066 [2024-05-14 22:59:29.223688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.066 Malloc0 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:12:17.066 [2024-05-14 22:59:29.291391] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:17.066 [2024-05-14 22:59:29.291922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.066 22:59:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.067 22:59:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:17.067 22:59:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:17.067 22:59:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:17.067 22:59:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:17.067 22:59:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:17.067 22:59:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:17.067 { 00:12:17.067 "params": { 00:12:17.067 "name": "Nvme$subsystem", 00:12:17.067 "trtype": "$TEST_TRANSPORT", 00:12:17.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:17.067 "adrfam": "ipv4", 00:12:17.067 "trsvcid": "$NVMF_PORT", 00:12:17.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:17.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:17.067 "hdgst": ${hdgst:-false}, 00:12:17.067 "ddgst": ${ddgst:-false} 00:12:17.067 }, 00:12:17.067 "method": "bdev_nvme_attach_controller" 00:12:17.067 } 00:12:17.067 EOF 00:12:17.067 )") 00:12:17.067 22:59:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:17.067 22:59:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:12:17.067 22:59:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:17.067 22:59:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:17.067 "params": { 00:12:17.067 "name": "Nvme1", 00:12:17.067 "trtype": "tcp", 00:12:17.067 "traddr": "10.0.0.2", 00:12:17.067 "adrfam": "ipv4", 00:12:17.067 "trsvcid": "4420", 00:12:17.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:17.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:17.067 "hdgst": false, 00:12:17.067 "ddgst": false 00:12:17.067 }, 00:12:17.067 "method": "bdev_nvme_attach_controller" 00:12:17.067 }' 00:12:17.067 [2024-05-14 22:59:29.349699] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
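The target configuration above is driven entirely through rpc_cmd, which in the autotest harness wraps scripts/rpc.py against the /var/tmp/spdk.sock socket the target announced when it started. Run by hand against the same target, the equivalent sequence would look roughly like this (a sketch, not a new script):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB of 512 B blocks = the 131072 blocks bdevio reports below
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420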
00:12:17.067 [2024-05-14 22:59:29.349797] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76866 ] 00:12:17.325 [2024-05-14 22:59:29.491924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:17.325 [2024-05-14 22:59:29.553977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.325 [2024-05-14 22:59:29.554076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.325 [2024-05-14 22:59:29.554082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.325 I/O targets: 00:12:17.325 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:17.325 00:12:17.325 00:12:17.325 CUnit - A unit testing framework for C - Version 2.1-3 00:12:17.325 http://cunit.sourceforge.net/ 00:12:17.325 00:12:17.325 00:12:17.325 Suite: bdevio tests on: Nvme1n1 00:12:17.583 Test: blockdev write read block ...passed 00:12:17.583 Test: blockdev write zeroes read block ...passed 00:12:17.583 Test: blockdev write zeroes read no split ...passed 00:12:17.583 Test: blockdev write zeroes read split ...passed 00:12:17.583 Test: blockdev write zeroes read split partial ...passed 00:12:17.583 Test: blockdev reset ...[2024-05-14 22:59:29.804503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:17.583 [2024-05-14 22:59:29.804634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fe660 (9): Bad file descriptor 00:12:17.583 [2024-05-14 22:59:29.820031] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:17.583 passed 00:12:17.583 Test: blockdev write read 8 blocks ...passed 00:12:17.583 Test: blockdev write read size > 128k ...passed 00:12:17.583 Test: blockdev write read invalid size ...passed 00:12:17.583 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:17.583 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:17.583 Test: blockdev write read max offset ...passed 00:12:17.583 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:17.583 Test: blockdev writev readv 8 blocks ...passed 00:12:17.583 Test: blockdev writev readv 30 x 1block ...passed 00:12:17.840 Test: blockdev writev readv block ...passed 00:12:17.840 Test: blockdev writev readv size > 128k ...passed 00:12:17.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:17.840 Test: blockdev comparev and writev ...[2024-05-14 22:59:29.992261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.840 [2024-05-14 22:59:29.992329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:17.840 [2024-05-14 22:59:29.992357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.840 [2024-05-14 22:59:29.992372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:17.840 [2024-05-14 22:59:29.992995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.840 [2024-05-14 22:59:29.993031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:17.840 [2024-05-14 22:59:29.993053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.841 [2024-05-14 22:59:29.993067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:17.841 [2024-05-14 22:59:29.993533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.841 [2024-05-14 22:59:29.993566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:17.841 [2024-05-14 22:59:29.993589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.841 [2024-05-14 22:59:29.993603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:17.841 [2024-05-14 22:59:29.994038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.841 [2024-05-14 22:59:29.994077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:17.841 [2024-05-14 22:59:29.994099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.841 [2024-05-14 22:59:29.994113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:17.841 passed 00:12:17.841 Test: blockdev nvme passthru rw ...passed 00:12:17.841 Test: blockdev nvme passthru vendor specific ...[2024-05-14 22:59:30.078171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.841 [2024-05-14 22:59:30.078221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:17.841 [2024-05-14 22:59:30.078559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.841 [2024-05-14 22:59:30.078589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:17.841 [2024-05-14 22:59:30.078793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.841 [2024-05-14 22:59:30.078823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:17.841 [2024-05-14 22:59:30.079025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.841 [2024-05-14 22:59:30.079052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:17.841 passed 00:12:17.841 Test: blockdev nvme admin passthru ...passed 00:12:17.841 Test: blockdev copy ...passed 00:12:17.841 00:12:17.841 Run Summary: Type Total Ran Passed Failed Inactive 00:12:17.841 suites 1 1 n/a 0 0 00:12:17.841 tests 23 23 23 0 0 00:12:17.841 asserts 
152 152 152 0 n/a 00:12:17.841 00:12:17.841 Elapsed time = 0.894 seconds 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:18.099 rmmod nvme_tcp 00:12:18.099 rmmod nvme_fabrics 00:12:18.099 rmmod nvme_keyring 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 76806 ']' 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 76806 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 76806 ']' 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 76806 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76806 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:12:18.099 killing process with pid 76806 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76806' 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 76806 00:12:18.099 [2024-05-14 22:59:30.431481] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:18.099 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 76806 00:12:18.357 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:18.357 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:18.357 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:18.357 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.357 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:18.357 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:18.357 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.357 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.357 22:59:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:18.357 00:12:18.357 real 0m3.062s 00:12:18.357 user 0m10.866s 00:12:18.357 sys 0m0.718s 00:12:18.357 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:18.357 22:59:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:18.357 ************************************ 00:12:18.357 END TEST nvmf_bdevio 00:12:18.357 ************************************ 00:12:18.357 22:59:30 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:12:18.357 22:59:30 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:18.357 22:59:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:18.357 22:59:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:18.357 22:59:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:18.357 ************************************ 00:12:18.357 START TEST nvmf_bdevio_no_huge 00:12:18.357 ************************************ 00:12:18.358 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:18.617 * Looking for test storage... 00:12:18.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
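The nvmf_bdevio_no_huge test started above is launched by the CI through run_test with the arguments shown a few lines earlier; outside the CI the same case can presumably be reproduced directly from a built SPDK tree (root privileges assumed, run_test only adds timing and bookkeeping around the script):

  sudo ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages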
00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:18.617 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 
00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:18.618 Cannot find device "nvmf_tgt_br" 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.618 Cannot find device "nvmf_tgt_br2" 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:18.618 Cannot find device "nvmf_tgt_br" 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:18.618 Cannot find device "nvmf_tgt_br2" 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:18.618 22:59:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:18.878 22:59:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:18.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:12:18.878 00:12:18.878 --- 10.0.0.2 ping statistics --- 00:12:18.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.878 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:18.878 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:18.878 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:18.878 00:12:18.878 --- 10.0.0.3 ping statistics --- 00:12:18.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.878 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:18.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:18.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:18.878 00:12:18.878 --- 10.0.0.1 ping statistics --- 00:12:18.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.878 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=77039 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 77039 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 77039 ']' 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:18.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:18.878 22:59:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:19.137 [2024-05-14 22:59:31.284678] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:12:19.137 [2024-05-14 22:59:31.284785] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:19.137 [2024-05-14 22:59:31.433548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.395 [2024-05-14 22:59:31.567234] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
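The only difference from the first nvmfappstart is the --no-huge -s 1024 pair, which the EAL parameter line above translates into -m 1024 --no-huge --iova-mode=va (1024 MB of ordinary memory, virtual-address IOVAs) instead of the hugepage-backed --iova-mode=pa run earlier. Side by side, as traced:

  # first run (hugepages, IOVA mode pa)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
  # this run (no hugepages, 1024 MB of plain memory, IOVA mode va)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78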
00:12:19.395 [2024-05-14 22:59:31.567290] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.395 [2024-05-14 22:59:31.567303] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.395 [2024-05-14 22:59:31.567313] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.395 [2024-05-14 22:59:31.567323] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.395 [2024-05-14 22:59:31.567492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:19.395 [2024-05-14 22:59:31.568181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:19.395 [2024-05-14 22:59:31.568329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:19.395 [2024-05-14 22:59:31.568706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.961 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:19.961 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:12:19.961 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:19.961 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.961 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:20.220 [2024-05-14 22:59:32.381104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:20.220 Malloc0 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.220 22:59:32 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:20.220 [2024-05-14 22:59:32.418459] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:20.220 [2024-05-14 22:59:32.418707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:20.220 { 00:12:20.220 "params": { 00:12:20.220 "name": "Nvme$subsystem", 00:12:20.220 "trtype": "$TEST_TRANSPORT", 00:12:20.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:20.220 "adrfam": "ipv4", 00:12:20.220 "trsvcid": "$NVMF_PORT", 00:12:20.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:20.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:20.220 "hdgst": ${hdgst:-false}, 00:12:20.220 "ddgst": ${ddgst:-false} 00:12:20.220 }, 00:12:20.220 "method": "bdev_nvme_attach_controller" 00:12:20.220 } 00:12:20.220 EOF 00:12:20.220 )") 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:20.220 22:59:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:20.220 "params": { 00:12:20.220 "name": "Nvme1", 00:12:20.220 "trtype": "tcp", 00:12:20.220 "traddr": "10.0.0.2", 00:12:20.220 "adrfam": "ipv4", 00:12:20.220 "trsvcid": "4420", 00:12:20.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:20.220 "hdgst": false, 00:12:20.220 "ddgst": false 00:12:20.220 }, 00:12:20.220 "method": "bdev_nvme_attach_controller" 00:12:20.220 }' 00:12:20.220 [2024-05-14 22:59:32.487182] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
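gen_nvmf_target_json above expands to a one-controller bdev configuration and hands it to bdevio on /dev/fd/62. Written out to a regular file, a minimal standalone equivalent would look roughly like the sketch below; the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config layout and is assumed here (the trace only shows the bdev_nvme_attach_controller fragment), and /tmp/nvme1_bdev.json is a made-up path used only for illustration. The file would contain:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

and the run itself would be:

  ./test/bdev/bdevio/bdevio --json /tmp/nvme1_bdev.json --no-huge -s 1024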
00:12:20.220 [2024-05-14 22:59:32.487296] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid77099 ] 00:12:20.479 [2024-05-14 22:59:32.640970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:20.480 [2024-05-14 22:59:32.775362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.480 [2024-05-14 22:59:32.775502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.480 [2024-05-14 22:59:32.776248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.739 I/O targets: 00:12:20.739 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:20.739 00:12:20.739 00:12:20.739 CUnit - A unit testing framework for C - Version 2.1-3 00:12:20.739 http://cunit.sourceforge.net/ 00:12:20.739 00:12:20.739 00:12:20.739 Suite: bdevio tests on: Nvme1n1 00:12:20.739 Test: blockdev write read block ...passed 00:12:20.739 Test: blockdev write zeroes read block ...passed 00:12:20.739 Test: blockdev write zeroes read no split ...passed 00:12:20.739 Test: blockdev write zeroes read split ...passed 00:12:20.739 Test: blockdev write zeroes read split partial ...passed 00:12:20.739 Test: blockdev reset ...[2024-05-14 22:59:33.085508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:20.739 [2024-05-14 22:59:33.085636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a4360 (9): Bad file descriptor 00:12:20.739 [2024-05-14 22:59:33.102022] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:20.739 passed 00:12:20.739 Test: blockdev write read 8 blocks ...passed 00:12:20.739 Test: blockdev write read size > 128k ...passed 00:12:20.739 Test: blockdev write read invalid size ...passed 00:12:20.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:20.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:20.997 Test: blockdev write read max offset ...passed 00:12:20.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:20.997 Test: blockdev writev readv 8 blocks ...passed 00:12:20.997 Test: blockdev writev readv 30 x 1block ...passed 00:12:20.997 Test: blockdev writev readv block ...passed 00:12:20.997 Test: blockdev writev readv size > 128k ...passed 00:12:20.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:20.998 Test: blockdev comparev and writev ...[2024-05-14 22:59:33.278241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:20.998 [2024-05-14 22:59:33.278609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:20.998 [2024-05-14 22:59:33.278904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:20.998 [2024-05-14 22:59:33.279114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:20.998 [2024-05-14 22:59:33.279493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:20.998 [2024-05-14 22:59:33.279695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:20.998 [2024-05-14 22:59:33.279914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:20.998 [2024-05-14 22:59:33.280216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:20.998 [2024-05-14 22:59:33.280634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:20.998 [2024-05-14 22:59:33.280844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:20.998 [2024-05-14 22:59:33.281047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:20.998 [2024-05-14 22:59:33.281256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:20.998 [2024-05-14 22:59:33.281744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:20.998 [2024-05-14 22:59:33.281871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:20.998 [2024-05-14 22:59:33.281952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:20.998 [2024-05-14 22:59:33.282119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:20.998 passed 00:12:20.998 Test: blockdev nvme passthru rw ...passed 00:12:20.998 Test: blockdev nvme passthru vendor specific ...[2024-05-14 22:59:33.365365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:20.998 [2024-05-14 22:59:33.365864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:20.998 [2024-05-14 22:59:33.366125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:20.998 [2024-05-14 22:59:33.366317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:20.998 [2024-05-14 22:59:33.366631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:20.998 [2024-05-14 22:59:33.366717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:20.998 [2024-05-14 22:59:33.366939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:20.998 [2024-05-14 22:59:33.367139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 spassed 00:12:20.998 Test: blockdev nvme admin passthru ...qhd:002f p:0 m:0 dnr:0 00:12:20.998 passed 00:12:21.262 Test: blockdev copy ...passed 00:12:21.262 00:12:21.263 Run Summary: Type Total Ran Passed Failed Inactive 00:12:21.263 suites 1 1 n/a 0 0 00:12:21.263 tests 23 23 23 0 0 00:12:21.263 asserts 152 152 152 0 
n/a 00:12:21.263 00:12:21.263 Elapsed time = 0.943 seconds 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:21.525 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.525 rmmod nvme_tcp 00:12:21.525 rmmod nvme_fabrics 00:12:21.525 rmmod nvme_keyring 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 77039 ']' 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 77039 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 77039 ']' 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 77039 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77039 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:12:21.783 killing process with pid 77039 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77039' 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 77039 00:12:21.783 [2024-05-14 22:59:33.952264] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:21.783 22:59:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 77039 00:12:22.042 22:59:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:22.042 22:59:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:22.042 22:59:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:22.042 22:59:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.042 22:59:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.042 22:59:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.042 22:59:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.042 22:59:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.042 22:59:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:22.042 00:12:22.042 real 0m3.651s 00:12:22.042 user 0m13.114s 00:12:22.042 sys 0m1.281s 00:12:22.042 22:59:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:22.042 22:59:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:22.042 ************************************ 00:12:22.042 END TEST nvmf_bdevio_no_huge 00:12:22.042 ************************************ 00:12:22.042 22:59:34 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:22.042 22:59:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:22.042 22:59:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:22.042 22:59:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.042 ************************************ 00:12:22.042 START TEST nvmf_tls 00:12:22.042 ************************************ 00:12:22.042 22:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:22.301 * Looking for test storage... 00:12:22.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 
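The common.sh setup above derives NVME_HOSTNQN from nvme gen-hostnqn, which prints a UUID-based NQN like the one captured in this log. A minimal Python sketch of that string layout follows; only the format is taken from the output above, and sourcing the UUID from uuid4() (rather than, say, the DMI product UUID that nvme-cli may prefer) is an assumption.

import uuid

# Hedged sketch: emit a host NQN in the same uuid-based form that
# `nvme gen-hostnqn` printed above. Only the string layout is taken from
# the log; generating the UUID with uuid4() is an assumption.
def gen_hostnqn() -> str:
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

print(gen_hostnqn())  # e.g. nqn.2014-08.org.nvmexpress:uuid:58e20ac9-...
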
00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:22.301 Cannot find device "nvmf_tgt_br" 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:22.301 Cannot find device "nvmf_tgt_br2" 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link 
set nvmf_tgt_br down 00:12:22.301 Cannot find device "nvmf_tgt_br" 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:22.301 Cannot find device "nvmf_tgt_br2" 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:22.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:22.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:22.301 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i 
nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:22.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:12:22.605 00:12:22.605 --- 10.0.0.2 ping statistics --- 00:12:22.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.605 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:22.605 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:22.605 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:12:22.605 00:12:22.605 --- 10.0.0.3 ping statistics --- 00:12:22.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.605 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:22.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:12:22.605 00:12:22.605 --- 10.0.0.1 ping statistics --- 00:12:22.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.605 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:22.605 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77279 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77279 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77279 ']' 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:22.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
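With the veth/namespace plumbing verified by the pings above, nvmfappstart launches nvmf_tgt inside the target namespace with --wait-for-rpc, and waitforlisten blocks until the app answers on its JSON-RPC socket. Below is a minimal sketch of that kind of readiness probe, assuming the default /var/tmp/spdk.sock path shown in the log and the standard rpc_get_methods call; the real waitforlisten helper does more (for example re-checking that the pid is still alive between retries), which is omitted here.

import json
import socket
import time

# Hedged sketch of a waitforlisten-style readiness probe: retry until the
# freshly started SPDK app answers a JSON-RPC request on its Unix socket.
def wait_for_rpc(sock_path: str = "/var/tmp/spdk.sock", timeout: float = 30.0) -> dict:
    deadline = time.monotonic() + timeout
    request = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "rpc_get_methods"}).encode()
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.settimeout(2.0)
                s.connect(sock_path)
                s.sendall(request)
                buf = b""
                while True:
                    chunk = s.recv(4096)
                    if not chunk:                        # closed before a full reply
                        raise OSError("connection closed")
                    buf += chunk
                    try:
                        return json.loads(buf.decode())  # reply complete
                    except json.JSONDecodeError:
                        continue                         # keep reading
        except OSError:
            time.sleep(0.2)                              # not listening yet; retry
    raise TimeoutError(f"no JSON-RPC response on {sock_path}")

print(len(wait_for_rpc()["result"]), "RPC methods registered")
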
00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:22.606 22:59:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:22.606 [2024-05-14 22:59:34.910873] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:12:22.606 [2024-05-14 22:59:34.910980] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.886 [2024-05-14 22:59:35.049493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.886 [2024-05-14 22:59:35.135673] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.886 [2024-05-14 22:59:35.135748] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.886 [2024-05-14 22:59:35.135789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.886 [2024-05-14 22:59:35.135806] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.886 [2024-05-14 22:59:35.135819] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.886 [2024-05-14 22:59:35.135856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.821 22:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:23.821 22:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:12:23.821 22:59:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.821 22:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.821 22:59:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:23.821 22:59:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.821 22:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:23.821 22:59:35 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:24.079 true 00:12:24.079 22:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:24.079 22:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:12:24.337 22:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:12:24.337 22:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:24.337 22:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:24.595 22:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:24.595 22:59:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:12:24.853 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:12:24.853 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:24.853 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:25.111 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:25.111 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # 
jq -r .tls_version 00:12:25.370 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:12:25.370 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:25.370 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:25.370 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:25.629 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:12:25.629 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:25.629 22:59:37 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:25.888 22:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:25.888 22:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:12:26.146 22:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:12:26.146 22:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:26.146 22:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:26.406 22:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:26.406 22:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:26.665 22:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:12:26.665 22:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:26.665 22:59:38 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:26.665 22:59:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:26.665 22:59:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:26.665 22:59:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:26.665 22:59:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:12:26.665 22:59:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:26.665 22:59:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:26.665 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:26.665 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:26.665 22:59:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:26.665 22:59:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:26.665 22:59:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:26.665 22:59:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:12:26.665 22:59:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:26.665 22:59:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:26.925 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:26.925 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:12:26.925 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.XF4ctfSlZ5 00:12:26.925 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:26.925 
22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.dqM0o0Y4Tm 00:12:26.925 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:26.925 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:26.925 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.XF4ctfSlZ5 00:12:26.925 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dqM0o0Y4Tm 00:12:26.925 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:27.187 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:27.446 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.XF4ctfSlZ5 00:12:27.446 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XF4ctfSlZ5 00:12:27.446 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:27.705 [2024-05-14 22:59:39.905515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.705 22:59:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:27.963 22:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:28.221 [2024-05-14 22:59:40.389585] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:28.221 [2024-05-14 22:59:40.389685] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:28.221 [2024-05-14 22:59:40.389880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.221 22:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:28.479 malloc0 00:12:28.479 22:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:28.738 22:59:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XF4ctfSlZ5 00:12:28.996 [2024-05-14 22:59:41.176481] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:28.996 22:59:41 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.XF4ctfSlZ5 00:12:41.205 Initializing NVMe Controllers 00:12:41.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:41.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:41.205 Initialization complete. Launching workers. 
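The two NVMeTLSkey-1:01:...: strings generated above and written to /tmp/tmp.XF4ctfSlZ5 and /tmp/tmp.dqM0o0Y4Tm are in the TLS PSK interchange format that nvmf_subsystem_add_host --psk and spdk_nvme_perf --psk-path consume. Below is a hedged reconstruction of what the inline python step appears to do: base64-encode the configured PSK bytes with a CRC-32 appended and wrap the result with the NVMeTLSkey-1 prefix and a two-digit hash field. The little-endian CRC byte order and the reading of "01" as the hash indicator are assumptions inferred from the strings in this log. The throughput table that follows was measured over the connection secured with the first of these keys.

import base64
import zlib

# Hedged reconstruction of the format_interchange_psk / format_key step above:
#   NVMeTLSkey-1:<hash>:<base64(psk_bytes + crc32(psk_bytes))>:
# Assumptions: the CRC-32 is appended little-endian and the second field is
# the hash indicator formatted as two hex digits (01 here).
def format_interchange_psk(psk: bytes, hash_id: int = 1) -> str:
    crc = zlib.crc32(psk).to_bytes(4, byteorder="little")
    return f"NVMeTLSkey-1:{hash_id:02x}:{base64.b64encode(psk + crc).decode('ascii')}:"

# The test feeds the literal 32-character hex strings as key material:
print(format_interchange_psk(b"00112233445566778899aabbccddeeff"))
print(format_interchange_psk(b"ffeeddccbbaa99887766554433221100"))

If those assumptions hold, the two printed lines reproduce the key and key_2 values captured above; if the CRC byte order differs, only the base64 tail would change.
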
00:12:41.205 ======================================================== 00:12:41.205 Latency(us) 00:12:41.205 Device Information : IOPS MiB/s Average min max 00:12:41.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9251.93 36.14 6919.05 1696.81 15192.19 00:12:41.205 ======================================================== 00:12:41.205 Total : 9251.93 36.14 6919.05 1696.81 15192.19 00:12:41.205 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XF4ctfSlZ5 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XF4ctfSlZ5' 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77650 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77650 /var/tmp/bdevperf.sock 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77650 ']' 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:41.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:41.205 [2024-05-14 22:59:51.443613] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:12:41.205 [2024-05-14 22:59:51.443725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77650 ] 00:12:41.205 [2024-05-14 22:59:51.585930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.205 [2024-05-14 22:59:51.655705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:12:41.205 22:59:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XF4ctfSlZ5 00:12:41.205 [2024-05-14 22:59:51.991038] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:41.205 [2024-05-14 22:59:51.991195] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:41.205 TLSTESTn1 00:12:41.205 22:59:52 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:41.205 Running I/O for 10 seconds... 00:12:51.176 00:12:51.176 Latency(us) 00:12:51.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.176 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:51.176 Verification LBA range: start 0x0 length 0x2000 00:12:51.176 TLSTESTn1 : 10.02 3542.45 13.84 0.00 0.00 36059.27 8638.84 42181.35 00:12:51.176 =================================================================================================================== 00:12:51.176 Total : 3542.45 13.84 0.00 0.00 36059.27 8638.84 42181.35 00:12:51.176 0 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 77650 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77650 ']' 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77650 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77650 00:12:51.176 killing process with pid 77650 00:12:51.176 Received shutdown signal, test time was about 10.000000 seconds 00:12:51.176 00:12:51.176 Latency(us) 00:12:51.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.176 =================================================================================================================== 00:12:51.176 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77650' 00:12:51.176 
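Everything in this run_bdevperf pass is driven through the bdevperf app's own JSON-RPC socket (-r /var/tmp/bdevperf.sock): the controller is attached with bdev_nvme_attach_controller plus the --psk file generated earlier, then bdevperf.py perform_tests produces the TLSTESTn1 numbers above. The sketch below shows the attach step as a raw JSON-RPC request; the method and parameter names are copied from the request dumps that appear later in this log, and in the actual test scripts/rpc.py does this work.

import json
import socket

# Hedged sketch: the bdev_nvme_attach_controller call that rpc.py sends to the
# bdevperf socket, including the (deprecated) file-based "psk" parameter this
# test exercises. Parameter names mirror the request dumps in this log.
def attach_tls_controller(psk_path: str,
                          hostnqn: str = "nqn.2016-06.io.spdk:host1",
                          subnqn: str = "nqn.2016-06.io.spdk:cnode1",
                          sock_path: str = "/var/tmp/bdevperf.sock") -> dict:
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "TLSTEST",
            "trtype": "tcp",
            "adrfam": "ipv4",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
            "subnqn": subnqn,
            "hostnqn": hostnqn,
            "psk": psk_path,
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise RuntimeError("no JSON-RPC reply")
            buf += chunk
            try:
                return json.loads(buf.decode())   # contains "result" or "error"
            except json.JSONDecodeError:
                continue                          # reply not complete yet

# attach_tls_controller("/tmp/tmp.XF4ctfSlZ5") is the matching-key case; the
# mismatched key, hostnqn and subnqn variants exercised next return the
# Code=-32602 errors recorded in this log.
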
23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77650 00:12:51.176 [2024-05-14 23:00:02.246042] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77650 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dqM0o0Y4Tm 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dqM0o0Y4Tm 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dqM0o0Y4Tm 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dqM0o0Y4Tm' 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77788 00:12:51.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77788 /var/tmp/bdevperf.sock 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77788 ']' 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:51.176 [2024-05-14 23:00:02.500954] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:12:51.176 [2024-05-14 23:00:02.501062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77788 ] 00:12:51.176 [2024-05-14 23:00:02.639335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.176 [2024-05-14 23:00:02.710284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:12:51.176 23:00:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dqM0o0Y4Tm 00:12:51.176 [2024-05-14 23:00:03.064899] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:51.176 [2024-05-14 23:00:03.065021] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:51.176 [2024-05-14 23:00:03.069997] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:51.176 [2024-05-14 23:00:03.070578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dca40 (107): Transport endpoint is not connected 00:12:51.176 [2024-05-14 23:00:03.071566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dca40 (9): Bad file descriptor 00:12:51.176 [2024-05-14 23:00:03.072564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:51.176 [2024-05-14 23:00:03.072605] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:51.176 [2024-05-14 23:00:03.072619] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:51.176 2024/05/14 23:00:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.dqM0o0Y4Tm subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:12:51.176 request: 00:12:51.176 { 00:12:51.176 "method": "bdev_nvme_attach_controller", 00:12:51.176 "params": { 00:12:51.176 "name": "TLSTEST", 00:12:51.176 "trtype": "tcp", 00:12:51.176 "traddr": "10.0.0.2", 00:12:51.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.176 "adrfam": "ipv4", 00:12:51.176 "trsvcid": "4420", 00:12:51.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.176 "psk": "/tmp/tmp.dqM0o0Y4Tm" 00:12:51.176 } 00:12:51.176 } 00:12:51.176 Got JSON-RPC error response 00:12:51.176 GoRPCClient: error on JSON-RPC call 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 77788 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77788 ']' 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77788 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77788 00:12:51.176 killing process with pid 77788 00:12:51.176 Received shutdown signal, test time was about 10.000000 seconds 00:12:51.176 00:12:51.176 Latency(us) 00:12:51.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.176 =================================================================================================================== 00:12:51.176 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77788' 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77788 00:12:51.176 [2024-05-14 23:00:03.123746] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77788 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XF4ctfSlZ5 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:51.176 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XF4ctfSlZ5 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XF4ctfSlZ5 00:12:51.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XF4ctfSlZ5' 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77819 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77819 /var/tmp/bdevperf.sock 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77819 ']' 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:51.177 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:51.177 [2024-05-14 23:00:03.403401] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:12:51.177 [2024-05-14 23:00:03.403654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77819 ] 00:12:51.177 [2024-05-14 23:00:03.544247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.435 [2024-05-14 23:00:03.616944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.435 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:51.435 23:00:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:12:51.435 23:00:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.XF4ctfSlZ5 00:12:51.693 [2024-05-14 23:00:03.968187] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:51.693 [2024-05-14 23:00:03.968580] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:51.693 [2024-05-14 23:00:03.978563] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:51.693 [2024-05-14 23:00:03.978787] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:51.693 [2024-05-14 23:00:03.979053] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:51.693 [2024-05-14 23:00:03.979227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x843a40 (107): Transport endpoint is not connected 00:12:51.693 [2024-05-14 23:00:03.980218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x843a40 (9): Bad file descriptor 00:12:51.693 [2024-05-14 23:00:03.981214] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:51.693 [2024-05-14 23:00:03.981378] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:51.694 [2024-05-14 23:00:03.981522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
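The wrong-hostnqn attempt above fails during the TLS handshake on the target side: the host offers a PSK identity derived from its host NQN and the target subsystem NQN, and tcp_sock_get_key / posix_sock_psk_find_session_server_cb only know the identity registered for host1, hence "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1". A small sketch of that lookup follows, with the identity assembled exactly as the error messages display it; the internal meaning of the "NVMe0R01" tag (version and hash fields) is not derived here and is only carried over from the log as an assumption.

# Hedged sketch: why the host2 connection is rejected. The identity string is
# assembled the way the errors above print it: "<tag> <hostnqn> <subnqn>".
# The tag "NVMe0R01" is copied verbatim from the log and not interpreted.
def psk_identity(hostnqn: str, subnqn: str, tag: str = "NVMe0R01") -> str:
    return f"{tag} {hostnqn} {subnqn}"

# Only host1 was registered with nvmf_subsystem_add_host --psk ...
registered = {psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode1")}

probe = psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1")
print(probe in registered)   # False -> "Could not find PSK for identity"
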
00:12:51.694 2024/05/14 23:00:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.XF4ctfSlZ5 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:12:51.694 request: 00:12:51.694 { 00:12:51.694 "method": "bdev_nvme_attach_controller", 00:12:51.694 "params": { 00:12:51.694 "name": "TLSTEST", 00:12:51.694 "trtype": "tcp", 00:12:51.694 "traddr": "10.0.0.2", 00:12:51.694 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:51.694 "adrfam": "ipv4", 00:12:51.694 "trsvcid": "4420", 00:12:51.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.694 "psk": "/tmp/tmp.XF4ctfSlZ5" 00:12:51.694 } 00:12:51.694 } 00:12:51.694 Got JSON-RPC error response 00:12:51.694 GoRPCClient: error on JSON-RPC call 00:12:51.694 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 77819 00:12:51.694 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77819 ']' 00:12:51.694 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77819 00:12:51.694 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:12:51.694 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.694 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77819 00:12:51.694 killing process with pid 77819 00:12:51.694 Received shutdown signal, test time was about 10.000000 seconds 00:12:51.694 00:12:51.694 Latency(us) 00:12:51.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.694 =================================================================================================================== 00:12:51.694 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:51.694 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:12:51.694 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:12:51.694 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77819' 00:12:51.694 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77819 00:12:51.694 [2024-05-14 23:00:04.038603] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:51.694 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77819 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XF4ctfSlZ5 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XF4ctfSlZ5 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XF4ctfSlZ5 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XF4ctfSlZ5' 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77847 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77847 /var/tmp/bdevperf.sock 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77847 ']' 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:51.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:51.952 23:00:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:51.952 [2024-05-14 23:00:04.295048] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:12:51.952 [2024-05-14 23:00:04.295178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77847 ] 00:12:52.209 [2024-05-14 23:00:04.445448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.210 [2024-05-14 23:00:04.506064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.143 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:53.143 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:12:53.143 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XF4ctfSlZ5 00:12:53.143 [2024-05-14 23:00:05.521870] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:53.143 [2024-05-14 23:00:05.521986] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:53.143 [2024-05-14 23:00:05.528864] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:53.143 [2024-05-14 23:00:05.528907] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:53.143 [2024-05-14 23:00:05.528966] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:53.143 [2024-05-14 23:00:05.529526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154fa40 (107): Transport endpoint is not connected 00:12:53.143 [2024-05-14 23:00:05.530517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154fa40 (9): Bad file descriptor 00:12:53.143 [2024-05-14 23:00:05.531513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:53.143 [2024-05-14 23:00:05.531539] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:53.143 [2024-05-14 23:00:05.531550] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
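The "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" errors above are the intended outcome of this negative test: the initiator presents a PSK, but the target has never registered a key for that host/subsystem pair, so the listener's PSK lookup fails and the connection is torn down. For reference, a minimal sketch of the target-side registration that the passing runs later in this log perform for cnode1; applying it to cnode2 here is purely illustrative, and the key path is just the test's temporary file:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # register the PSK for this host/subsystem pair on the target
  # (the step this negative test deliberately omits)
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XF4ctfSlZ5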
00:12:53.401 2024/05/14 23:00:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.XF4ctfSlZ5 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:12:53.401 request: 00:12:53.401 { 00:12:53.401 "method": "bdev_nvme_attach_controller", 00:12:53.401 "params": { 00:12:53.401 "name": "TLSTEST", 00:12:53.401 "trtype": "tcp", 00:12:53.401 "traddr": "10.0.0.2", 00:12:53.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.401 "adrfam": "ipv4", 00:12:53.402 "trsvcid": "4420", 00:12:53.402 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:53.402 "psk": "/tmp/tmp.XF4ctfSlZ5" 00:12:53.402 } 00:12:53.402 } 00:12:53.402 Got JSON-RPC error response 00:12:53.402 GoRPCClient: error on JSON-RPC call 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 77847 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77847 ']' 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77847 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77847 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:12:53.402 killing process with pid 77847 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77847' 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77847 00:12:53.402 Received shutdown signal, test time was about 10.000000 seconds 00:12:53.402 00:12:53.402 Latency(us) 00:12:53.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.402 =================================================================================================================== 00:12:53.402 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:53.402 [2024-05-14 23:00:05.579654] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77847 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:53.402 23:00:05 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77892 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77892 /var/tmp/bdevperf.sock 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77892 ']' 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:53.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:53.402 23:00:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:53.659 [2024-05-14 23:00:05.822714] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:12:53.659 [2024-05-14 23:00:05.823381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77892 ] 00:12:53.659 [2024-05-14 23:00:05.960911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.659 [2024-05-14 23:00:06.039614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.917 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:53.917 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:12:53.917 23:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:54.176 [2024-05-14 23:00:06.399001] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:54.176 [2024-05-14 23:00:06.401128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243ba00 (9): Bad file descriptor 00:12:54.176 [2024-05-14 23:00:06.402123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:54.176 [2024-05-14 23:00:06.402153] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:54.176 [2024-05-14 23:00:06.402165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:12:54.176 2024/05/14 23:00:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:12:54.176 request: 00:12:54.176 { 00:12:54.176 "method": "bdev_nvme_attach_controller", 00:12:54.176 "params": { 00:12:54.176 "name": "TLSTEST", 00:12:54.176 "trtype": "tcp", 00:12:54.176 "traddr": "10.0.0.2", 00:12:54.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:54.176 "adrfam": "ipv4", 00:12:54.176 "trsvcid": "4420", 00:12:54.176 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:12:54.176 } 00:12:54.176 } 00:12:54.176 Got JSON-RPC error response 00:12:54.176 GoRPCClient: error on JSON-RPC call 00:12:54.176 23:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 77892 00:12:54.176 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77892 ']' 00:12:54.176 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77892 00:12:54.176 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:12:54.176 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:54.176 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77892 00:12:54.176 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:12:54.176 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:12:54.176 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77892' 00:12:54.176 killing process with pid 77892 00:12:54.176 23:00:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@965 -- # kill 77892 00:12:54.176 Received shutdown signal, test time was about 10.000000 seconds 00:12:54.176 00:12:54.176 Latency(us) 00:12:54.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.176 =================================================================================================================== 00:12:54.176 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:54.176 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77892 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 77279 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77279 ']' 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77279 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77279 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:54.444 killing process with pid 77279 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77279' 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77279 00:12:54.444 [2024-05-14 23:00:06.666407] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:54.444 [2024-05-14 23:00:06.666450] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:54.444 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77279 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.iCkTIHcImr 
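The key_long printed above is the PSK in NVMe/TCP TLS interchange form: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (02 here, matching the 2 passed to format_interchange_psk), and a base64 field holding the configured secret followed by a 4-byte CRC-32, all colon-delimited. A quick way to take that string apart with nothing but coreutils, using the exact value from this log (only the field layout is shown; the CRC byte order is not recomputed here):

  key='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  b64=$(printf '%s' "$key" | cut -d: -f3)                    # the base64 field between the 2nd and 3rd colon
  printf '%s' "$b64" | base64 -d | head -c 48; echo          # first 48 bytes: the configured secret from target/tls.sh@159
  printf '%s' "$b64" | base64 -d | tail -c 4 | od -An -tx1   # trailing 4 bytes: CRC-32 over the secret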
00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.iCkTIHcImr 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77934 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77934 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 77934 ']' 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:54.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:54.733 23:00:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:54.733 [2024-05-14 23:00:06.973819] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:12:54.733 [2024-05-14 23:00:06.973903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.733 [2024-05-14 23:00:07.114411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.990 [2024-05-14 23:00:07.177337] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.990 [2024-05-14 23:00:07.177392] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.990 [2024-05-14 23:00:07.177404] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.990 [2024-05-14 23:00:07.177412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.990 [2024-05-14 23:00:07.177420] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:54.990 [2024-05-14 23:00:07.177446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.990 23:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:54.990 23:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:12:54.990 23:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.990 23:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.990 23:00:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:54.990 23:00:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.990 23:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.iCkTIHcImr 00:12:54.990 23:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iCkTIHcImr 00:12:54.990 23:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:55.247 [2024-05-14 23:00:07.578419] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.247 23:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:55.506 23:00:07 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:56.072 [2024-05-14 23:00:08.190500] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:56.072 [2024-05-14 23:00:08.190613] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:56.072 [2024-05-14 23:00:08.190827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.072 23:00:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:56.330 malloc0 00:12:56.330 23:00:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:56.589 23:00:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iCkTIHcImr 00:12:56.848 [2024-05-14 23:00:09.037698] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iCkTIHcImr 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iCkTIHcImr' 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=78027 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 78027 /var/tmp/bdevperf.sock 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78027 ']' 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:56.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:56.848 23:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.848 [2024-05-14 23:00:09.102503] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:12:56.848 [2024-05-14 23:00:09.102596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78027 ] 00:12:56.848 [2024-05-14 23:00:09.233991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.106 [2024-05-14 23:00:09.301882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.106 23:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.106 23:00:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:12:57.106 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iCkTIHcImr 00:12:57.362 [2024-05-14 23:00:09.634866] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:57.362 [2024-05-14 23:00:09.635031] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:57.362 TLSTESTn1 00:12:57.362 23:00:09 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:57.619 Running I/O for 10 seconds... 
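The "Running I/O for 10 seconds..." message marks the start of the measured phase. Condensed from the trace above, the pattern is: launch bdevperf idle (-z) on a private RPC socket, attach the TLS-protected controller through that socket, then trigger the workload with bdevperf.py. A minimal sketch using the same binaries, flags and paths that appear in this log (the wait loop stands in for the script's waitforlisten helper):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock
  "$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
  while [ ! -S "$sock" ]; do sleep 0.1; done                 # wait for the bdevperf RPC socket to appear
  "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iCkTIHcImr
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests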
00:13:07.595 00:13:07.595 Latency(us) 00:13:07.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.595 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:07.595 Verification LBA range: start 0x0 length 0x2000 00:13:07.595 TLSTESTn1 : 10.02 3716.01 14.52 0.00 0.00 34370.07 7298.33 34555.35 00:13:07.595 =================================================================================================================== 00:13:07.595 Total : 3716.01 14.52 0.00 0.00 34370.07 7298.33 34555.35 00:13:07.595 0 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 78027 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78027 ']' 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78027 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78027 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:07.595 killing process with pid 78027 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78027' 00:13:07.595 Received shutdown signal, test time was about 10.000000 seconds 00:13:07.595 00:13:07.595 Latency(us) 00:13:07.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.595 =================================================================================================================== 00:13:07.595 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78027 00:13:07.595 [2024-05-14 23:00:19.917611] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:07.595 23:00:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78027 00:13:07.853 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.iCkTIHcImr 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iCkTIHcImr 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iCkTIHcImr 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iCkTIHcImr 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:07.854 
23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iCkTIHcImr' 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=78155 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 78155 /var/tmp/bdevperf.sock 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78155 ']' 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:07.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:07.854 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:07.854 [2024-05-14 23:00:20.156487] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:07.854 [2024-05-14 23:00:20.156586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78155 ] 00:13:08.111 [2024-05-14 23:00:20.288846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.111 [2024-05-14 23:00:20.347862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.111 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:08.111 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:08.111 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iCkTIHcImr 00:13:08.369 [2024-05-14 23:00:20.687327] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:08.369 [2024-05-14 23:00:20.687406] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:08.369 [2024-05-14 23:00:20.687418] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.iCkTIHcImr 00:13:08.369 2024/05/14 23:00:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.iCkTIHcImr subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:13:08.370 request: 00:13:08.370 { 00:13:08.370 "method": 
"bdev_nvme_attach_controller", 00:13:08.370 "params": { 00:13:08.370 "name": "TLSTEST", 00:13:08.370 "trtype": "tcp", 00:13:08.370 "traddr": "10.0.0.2", 00:13:08.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:08.370 "adrfam": "ipv4", 00:13:08.370 "trsvcid": "4420", 00:13:08.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.370 "psk": "/tmp/tmp.iCkTIHcImr" 00:13:08.370 } 00:13:08.370 } 00:13:08.370 Got JSON-RPC error response 00:13:08.370 GoRPCClient: error on JSON-RPC call 00:13:08.370 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 78155 00:13:08.370 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78155 ']' 00:13:08.370 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78155 00:13:08.370 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:08.370 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:08.370 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78155 00:13:08.370 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:08.370 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:08.370 killing process with pid 78155 00:13:08.370 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78155' 00:13:08.370 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78155 00:13:08.370 Received shutdown signal, test time was about 10.000000 seconds 00:13:08.370 00:13:08.370 Latency(us) 00:13:08.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.370 =================================================================================================================== 00:13:08.370 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:08.370 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78155 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 77934 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 77934 ']' 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 77934 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77934 00:13:08.628 killing process with pid 77934 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77934' 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 77934 00:13:08.628 [2024-05-14 23:00:20.937201] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:08.628 [2024-05-14 23:00:20.937247] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:08.628 23:00:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 77934 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78192 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78192 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78192 ']' 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:08.886 23:00:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.886 [2024-05-14 23:00:21.205539] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:08.886 [2024-05-14 23:00:21.205641] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.145 [2024-05-14 23:00:21.342594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.145 [2024-05-14 23:00:21.402611] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.145 [2024-05-14 23:00:21.402663] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.145 [2024-05-14 23:00:21.402675] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.145 [2024-05-14 23:00:21.402683] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.145 [2024-05-14 23:00:21.402691] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
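Both sides of the connection enforce the key-file mode that chmod 0600 set earlier: after the chmod 0666 above, the initiator's bdev_nvme_attach_controller failed with "Incorrect permissions for PSK file", and the target-side nvmf_subsystem_add_host in the entries that follow fails the same way. A small pre-flight check in the spirit of what these messages enforce (stat -c is GNU coreutils; the log only demonstrates that 0600 is accepted and 0666 rejected):

  key=/tmp/tmp.iCkTIHcImr
  mode=$(stat -c '%a' "$key")
  if [ "$mode" != "600" ]; then
      echo "key file $key has mode $mode; tightening to 0600 before use" >&2
      chmod 0600 "$key"    # the test itself does this (target/tls.sh@181) before the final, passing run
  fi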
00:13:09.145 [2024-05-14 23:00:21.402721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.iCkTIHcImr 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.iCkTIHcImr 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.iCkTIHcImr 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iCkTIHcImr 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:10.124 [2024-05-14 23:00:22.460646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.124 23:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:10.692 23:00:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:10.692 [2024-05-14 23:00:23.056735] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:10.692 [2024-05-14 23:00:23.057064] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:10.692 [2024-05-14 23:00:23.057335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.692 23:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:11.257 malloc0 00:13:11.257 23:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:11.257 23:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iCkTIHcImr 00:13:11.515 [2024-05-14 23:00:23.880985] tcp.c:3572:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:11.515 [2024-05-14 23:00:23.881422] tcp.c:3658:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 
00:13:11.515 [2024-05-14 23:00:23.881483] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:11.515 2024/05/14 23:00:23 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.iCkTIHcImr], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:13:11.515 request: 00:13:11.515 { 00:13:11.515 "method": "nvmf_subsystem_add_host", 00:13:11.515 "params": { 00:13:11.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.515 "host": "nqn.2016-06.io.spdk:host1", 00:13:11.515 "psk": "/tmp/tmp.iCkTIHcImr" 00:13:11.515 } 00:13:11.515 } 00:13:11.515 Got JSON-RPC error response 00:13:11.515 GoRPCClient: error on JSON-RPC call 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 78192 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78192 ']' 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78192 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78192 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:11.774 killing process with pid 78192 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78192' 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78192 00:13:11.774 [2024-05-14 23:00:23.932254] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:11.774 23:00:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78192 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.iCkTIHcImr 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78308 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78308 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78308 ']' 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:13:11.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:11.774 23:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:12.032 [2024-05-14 23:00:24.182076] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:12.033 [2024-05-14 23:00:24.182169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.033 [2024-05-14 23:00:24.316947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.033 [2024-05-14 23:00:24.388272] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.033 [2024-05-14 23:00:24.388330] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.033 [2024-05-14 23:00:24.388342] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.033 [2024-05-14 23:00:24.388351] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.033 [2024-05-14 23:00:24.388358] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.033 [2024-05-14 23:00:24.388381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.290 23:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:12.290 23:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:12.290 23:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:12.290 23:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.290 23:00:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:12.290 23:00:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.290 23:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.iCkTIHcImr 00:13:12.290 23:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iCkTIHcImr 00:13:12.290 23:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:12.549 [2024-05-14 23:00:24.795235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.549 23:00:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:12.812 23:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:13.071 [2024-05-14 23:00:25.395299] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:13.071 [2024-05-14 23:00:25.395398] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:13.071 
[2024-05-14 23:00:25.395590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.071 23:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:13.330 malloc0 00:13:13.330 23:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:13.587 23:00:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iCkTIHcImr 00:13:13.948 [2024-05-14 23:00:26.206751] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:13.948 23:00:26 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:13.948 23:00:26 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=78398 00:13:13.948 23:00:26 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:13.948 23:00:26 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 78398 /var/tmp/bdevperf.sock 00:13:13.948 23:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78398 ']' 00:13:13.948 23:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:13.948 23:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:13.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:13.948 23:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:13.948 23:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:13.948 23:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:13.948 [2024-05-14 23:00:26.273598] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:13:13.948 [2024-05-14 23:00:26.273689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78398 ] 00:13:14.204 [2024-05-14 23:00:26.405862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.204 [2024-05-14 23:00:26.467775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.204 23:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:14.205 23:00:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:14.205 23:00:26 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iCkTIHcImr 00:13:14.462 [2024-05-14 23:00:26.803602] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:14.462 [2024-05-14 23:00:26.803717] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:14.721 TLSTESTn1 00:13:14.721 23:00:26 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:14.981 23:00:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:14.981 "subsystems": [ 00:13:14.981 { 00:13:14.981 "subsystem": "keyring", 00:13:14.981 "config": [] 00:13:14.981 }, 00:13:14.981 { 00:13:14.981 "subsystem": "iobuf", 00:13:14.981 "config": [ 00:13:14.981 { 00:13:14.981 "method": "iobuf_set_options", 00:13:14.981 "params": { 00:13:14.981 "large_bufsize": 135168, 00:13:14.981 "large_pool_count": 1024, 00:13:14.981 "small_bufsize": 8192, 00:13:14.981 "small_pool_count": 8192 00:13:14.981 } 00:13:14.981 } 00:13:14.981 ] 00:13:14.981 }, 00:13:14.981 { 00:13:14.981 "subsystem": "sock", 00:13:14.981 "config": [ 00:13:14.981 { 00:13:14.981 "method": "sock_impl_set_options", 00:13:14.981 "params": { 00:13:14.981 "enable_ktls": false, 00:13:14.981 "enable_placement_id": 0, 00:13:14.981 "enable_quickack": false, 00:13:14.981 "enable_recv_pipe": true, 00:13:14.981 "enable_zerocopy_send_client": false, 00:13:14.981 "enable_zerocopy_send_server": true, 00:13:14.981 "impl_name": "posix", 00:13:14.981 "recv_buf_size": 2097152, 00:13:14.981 "send_buf_size": 2097152, 00:13:14.981 "tls_version": 0, 00:13:14.981 "zerocopy_threshold": 0 00:13:14.981 } 00:13:14.981 }, 00:13:14.981 { 00:13:14.981 "method": "sock_impl_set_options", 00:13:14.981 "params": { 00:13:14.981 "enable_ktls": false, 00:13:14.981 "enable_placement_id": 0, 00:13:14.981 "enable_quickack": false, 00:13:14.981 "enable_recv_pipe": true, 00:13:14.981 "enable_zerocopy_send_client": false, 00:13:14.981 "enable_zerocopy_send_server": true, 00:13:14.981 "impl_name": "ssl", 00:13:14.981 "recv_buf_size": 4096, 00:13:14.981 "send_buf_size": 4096, 00:13:14.981 "tls_version": 0, 00:13:14.981 "zerocopy_threshold": 0 00:13:14.981 } 00:13:14.981 } 00:13:14.981 ] 00:13:14.981 }, 00:13:14.981 { 00:13:14.981 "subsystem": "vmd", 00:13:14.981 "config": [] 00:13:14.981 }, 00:13:14.981 { 00:13:14.981 "subsystem": "accel", 00:13:14.981 "config": [ 00:13:14.981 { 00:13:14.981 "method": "accel_set_options", 00:13:14.981 "params": { 00:13:14.981 "buf_count": 2048, 00:13:14.981 "large_cache_size": 16, 00:13:14.981 
"sequence_count": 2048, 00:13:14.981 "small_cache_size": 128, 00:13:14.981 "task_count": 2048 00:13:14.981 } 00:13:14.981 } 00:13:14.981 ] 00:13:14.981 }, 00:13:14.981 { 00:13:14.981 "subsystem": "bdev", 00:13:14.981 "config": [ 00:13:14.981 { 00:13:14.981 "method": "bdev_set_options", 00:13:14.981 "params": { 00:13:14.981 "bdev_auto_examine": true, 00:13:14.981 "bdev_io_cache_size": 256, 00:13:14.981 "bdev_io_pool_size": 65535, 00:13:14.981 "iobuf_large_cache_size": 16, 00:13:14.981 "iobuf_small_cache_size": 128 00:13:14.981 } 00:13:14.981 }, 00:13:14.981 { 00:13:14.981 "method": "bdev_raid_set_options", 00:13:14.981 "params": { 00:13:14.981 "process_window_size_kb": 1024 00:13:14.981 } 00:13:14.981 }, 00:13:14.981 { 00:13:14.981 "method": "bdev_iscsi_set_options", 00:13:14.981 "params": { 00:13:14.981 "timeout_sec": 30 00:13:14.981 } 00:13:14.981 }, 00:13:14.981 { 00:13:14.981 "method": "bdev_nvme_set_options", 00:13:14.981 "params": { 00:13:14.981 "action_on_timeout": "none", 00:13:14.981 "allow_accel_sequence": false, 00:13:14.981 "arbitration_burst": 0, 00:13:14.981 "bdev_retry_count": 3, 00:13:14.981 "ctrlr_loss_timeout_sec": 0, 00:13:14.981 "delay_cmd_submit": true, 00:13:14.981 "dhchap_dhgroups": [ 00:13:14.981 "null", 00:13:14.981 "ffdhe2048", 00:13:14.981 "ffdhe3072", 00:13:14.981 "ffdhe4096", 00:13:14.981 "ffdhe6144", 00:13:14.981 "ffdhe8192" 00:13:14.981 ], 00:13:14.981 "dhchap_digests": [ 00:13:14.981 "sha256", 00:13:14.981 "sha384", 00:13:14.981 "sha512" 00:13:14.981 ], 00:13:14.981 "disable_auto_failback": false, 00:13:14.981 "fast_io_fail_timeout_sec": 0, 00:13:14.981 "generate_uuids": false, 00:13:14.981 "high_priority_weight": 0, 00:13:14.982 "io_path_stat": false, 00:13:14.982 "io_queue_requests": 0, 00:13:14.982 "keep_alive_timeout_ms": 10000, 00:13:14.982 "low_priority_weight": 0, 00:13:14.982 "medium_priority_weight": 0, 00:13:14.982 "nvme_adminq_poll_period_us": 10000, 00:13:14.982 "nvme_error_stat": false, 00:13:14.982 "nvme_ioq_poll_period_us": 0, 00:13:14.982 "rdma_cm_event_timeout_ms": 0, 00:13:14.982 "rdma_max_cq_size": 0, 00:13:14.982 "rdma_srq_size": 0, 00:13:14.982 "reconnect_delay_sec": 0, 00:13:14.982 "timeout_admin_us": 0, 00:13:14.982 "timeout_us": 0, 00:13:14.982 "transport_ack_timeout": 0, 00:13:14.982 "transport_retry_count": 4, 00:13:14.982 "transport_tos": 0 00:13:14.982 } 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "method": "bdev_nvme_set_hotplug", 00:13:14.982 "params": { 00:13:14.982 "enable": false, 00:13:14.982 "period_us": 100000 00:13:14.982 } 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "method": "bdev_malloc_create", 00:13:14.982 "params": { 00:13:14.982 "block_size": 4096, 00:13:14.982 "name": "malloc0", 00:13:14.982 "num_blocks": 8192, 00:13:14.982 "optimal_io_boundary": 0, 00:13:14.982 "physical_block_size": 4096, 00:13:14.982 "uuid": "bdce47de-bb78-40a0-a9b9-1ba7b1ca6389" 00:13:14.982 } 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "method": "bdev_wait_for_examine" 00:13:14.982 } 00:13:14.982 ] 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "subsystem": "nbd", 00:13:14.982 "config": [] 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "subsystem": "scheduler", 00:13:14.982 "config": [ 00:13:14.982 { 00:13:14.982 "method": "framework_set_scheduler", 00:13:14.982 "params": { 00:13:14.982 "name": "static" 00:13:14.982 } 00:13:14.982 } 00:13:14.982 ] 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "subsystem": "nvmf", 00:13:14.982 "config": [ 00:13:14.982 { 00:13:14.982 "method": "nvmf_set_config", 00:13:14.982 "params": { 00:13:14.982 
"admin_cmd_passthru": { 00:13:14.982 "identify_ctrlr": false 00:13:14.982 }, 00:13:14.982 "discovery_filter": "match_any" 00:13:14.982 } 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "method": "nvmf_set_max_subsystems", 00:13:14.982 "params": { 00:13:14.982 "max_subsystems": 1024 00:13:14.982 } 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "method": "nvmf_set_crdt", 00:13:14.982 "params": { 00:13:14.982 "crdt1": 0, 00:13:14.982 "crdt2": 0, 00:13:14.982 "crdt3": 0 00:13:14.982 } 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "method": "nvmf_create_transport", 00:13:14.982 "params": { 00:13:14.982 "abort_timeout_sec": 1, 00:13:14.982 "ack_timeout": 0, 00:13:14.982 "buf_cache_size": 4294967295, 00:13:14.982 "c2h_success": false, 00:13:14.982 "data_wr_pool_size": 0, 00:13:14.982 "dif_insert_or_strip": false, 00:13:14.982 "in_capsule_data_size": 4096, 00:13:14.982 "io_unit_size": 131072, 00:13:14.982 "max_aq_depth": 128, 00:13:14.982 "max_io_qpairs_per_ctrlr": 127, 00:13:14.982 "max_io_size": 131072, 00:13:14.982 "max_queue_depth": 128, 00:13:14.982 "num_shared_buffers": 511, 00:13:14.982 "sock_priority": 0, 00:13:14.982 "trtype": "TCP", 00:13:14.982 "zcopy": false 00:13:14.982 } 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "method": "nvmf_create_subsystem", 00:13:14.982 "params": { 00:13:14.982 "allow_any_host": false, 00:13:14.982 "ana_reporting": false, 00:13:14.982 "max_cntlid": 65519, 00:13:14.982 "max_namespaces": 10, 00:13:14.982 "min_cntlid": 1, 00:13:14.982 "model_number": "SPDK bdev Controller", 00:13:14.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:14.982 "serial_number": "SPDK00000000000001" 00:13:14.982 } 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "method": "nvmf_subsystem_add_host", 00:13:14.982 "params": { 00:13:14.982 "host": "nqn.2016-06.io.spdk:host1", 00:13:14.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:14.982 "psk": "/tmp/tmp.iCkTIHcImr" 00:13:14.982 } 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "method": "nvmf_subsystem_add_ns", 00:13:14.982 "params": { 00:13:14.982 "namespace": { 00:13:14.982 "bdev_name": "malloc0", 00:13:14.982 "nguid": "BDCE47DEBB7840A0A9B91BA7B1CA6389", 00:13:14.982 "no_auto_visible": false, 00:13:14.982 "nsid": 1, 00:13:14.982 "uuid": "bdce47de-bb78-40a0-a9b9-1ba7b1ca6389" 00:13:14.982 }, 00:13:14.982 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:13:14.982 } 00:13:14.982 }, 00:13:14.982 { 00:13:14.982 "method": "nvmf_subsystem_add_listener", 00:13:14.982 "params": { 00:13:14.982 "listen_address": { 00:13:14.982 "adrfam": "IPv4", 00:13:14.982 "traddr": "10.0.0.2", 00:13:14.982 "trsvcid": "4420", 00:13:14.982 "trtype": "TCP" 00:13:14.982 }, 00:13:14.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:14.982 "secure_channel": true 00:13:14.982 } 00:13:14.982 } 00:13:14.982 ] 00:13:14.982 } 00:13:14.982 ] 00:13:14.982 }' 00:13:14.982 23:00:27 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:15.242 23:00:27 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:15.242 "subsystems": [ 00:13:15.242 { 00:13:15.242 "subsystem": "keyring", 00:13:15.242 "config": [] 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "subsystem": "iobuf", 00:13:15.242 "config": [ 00:13:15.242 { 00:13:15.242 "method": "iobuf_set_options", 00:13:15.242 "params": { 00:13:15.242 "large_bufsize": 135168, 00:13:15.242 "large_pool_count": 1024, 00:13:15.242 "small_bufsize": 8192, 00:13:15.242 "small_pool_count": 8192 00:13:15.242 } 00:13:15.242 } 00:13:15.242 ] 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "subsystem": 
"sock", 00:13:15.242 "config": [ 00:13:15.242 { 00:13:15.242 "method": "sock_impl_set_options", 00:13:15.242 "params": { 00:13:15.242 "enable_ktls": false, 00:13:15.242 "enable_placement_id": 0, 00:13:15.242 "enable_quickack": false, 00:13:15.242 "enable_recv_pipe": true, 00:13:15.242 "enable_zerocopy_send_client": false, 00:13:15.242 "enable_zerocopy_send_server": true, 00:13:15.242 "impl_name": "posix", 00:13:15.242 "recv_buf_size": 2097152, 00:13:15.242 "send_buf_size": 2097152, 00:13:15.242 "tls_version": 0, 00:13:15.242 "zerocopy_threshold": 0 00:13:15.242 } 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "method": "sock_impl_set_options", 00:13:15.242 "params": { 00:13:15.242 "enable_ktls": false, 00:13:15.242 "enable_placement_id": 0, 00:13:15.242 "enable_quickack": false, 00:13:15.242 "enable_recv_pipe": true, 00:13:15.242 "enable_zerocopy_send_client": false, 00:13:15.242 "enable_zerocopy_send_server": true, 00:13:15.242 "impl_name": "ssl", 00:13:15.242 "recv_buf_size": 4096, 00:13:15.242 "send_buf_size": 4096, 00:13:15.242 "tls_version": 0, 00:13:15.242 "zerocopy_threshold": 0 00:13:15.242 } 00:13:15.242 } 00:13:15.242 ] 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "subsystem": "vmd", 00:13:15.242 "config": [] 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "subsystem": "accel", 00:13:15.242 "config": [ 00:13:15.242 { 00:13:15.242 "method": "accel_set_options", 00:13:15.242 "params": { 00:13:15.242 "buf_count": 2048, 00:13:15.242 "large_cache_size": 16, 00:13:15.242 "sequence_count": 2048, 00:13:15.242 "small_cache_size": 128, 00:13:15.242 "task_count": 2048 00:13:15.242 } 00:13:15.242 } 00:13:15.242 ] 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "subsystem": "bdev", 00:13:15.242 "config": [ 00:13:15.242 { 00:13:15.242 "method": "bdev_set_options", 00:13:15.242 "params": { 00:13:15.242 "bdev_auto_examine": true, 00:13:15.242 "bdev_io_cache_size": 256, 00:13:15.242 "bdev_io_pool_size": 65535, 00:13:15.242 "iobuf_large_cache_size": 16, 00:13:15.242 "iobuf_small_cache_size": 128 00:13:15.242 } 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "method": "bdev_raid_set_options", 00:13:15.242 "params": { 00:13:15.242 "process_window_size_kb": 1024 00:13:15.242 } 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "method": "bdev_iscsi_set_options", 00:13:15.242 "params": { 00:13:15.242 "timeout_sec": 30 00:13:15.242 } 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "method": "bdev_nvme_set_options", 00:13:15.242 "params": { 00:13:15.242 "action_on_timeout": "none", 00:13:15.242 "allow_accel_sequence": false, 00:13:15.242 "arbitration_burst": 0, 00:13:15.242 "bdev_retry_count": 3, 00:13:15.242 "ctrlr_loss_timeout_sec": 0, 00:13:15.242 "delay_cmd_submit": true, 00:13:15.242 "dhchap_dhgroups": [ 00:13:15.242 "null", 00:13:15.242 "ffdhe2048", 00:13:15.242 "ffdhe3072", 00:13:15.242 "ffdhe4096", 00:13:15.242 "ffdhe6144", 00:13:15.242 "ffdhe8192" 00:13:15.242 ], 00:13:15.242 "dhchap_digests": [ 00:13:15.242 "sha256", 00:13:15.242 "sha384", 00:13:15.242 "sha512" 00:13:15.242 ], 00:13:15.242 "disable_auto_failback": false, 00:13:15.242 "fast_io_fail_timeout_sec": 0, 00:13:15.242 "generate_uuids": false, 00:13:15.242 "high_priority_weight": 0, 00:13:15.242 "io_path_stat": false, 00:13:15.242 "io_queue_requests": 512, 00:13:15.242 "keep_alive_timeout_ms": 10000, 00:13:15.242 "low_priority_weight": 0, 00:13:15.242 "medium_priority_weight": 0, 00:13:15.242 "nvme_adminq_poll_period_us": 10000, 00:13:15.242 "nvme_error_stat": false, 00:13:15.242 "nvme_ioq_poll_period_us": 0, 00:13:15.242 "rdma_cm_event_timeout_ms": 0, 
00:13:15.242 "rdma_max_cq_size": 0, 00:13:15.242 "rdma_srq_size": 0, 00:13:15.242 "reconnect_delay_sec": 0, 00:13:15.242 "timeout_admin_us": 0, 00:13:15.242 "timeout_us": 0, 00:13:15.242 "transport_ack_timeout": 0, 00:13:15.242 "transport_retry_count": 4, 00:13:15.242 "transport_tos": 0 00:13:15.242 } 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "method": "bdev_nvme_attach_controller", 00:13:15.242 "params": { 00:13:15.242 "adrfam": "IPv4", 00:13:15.242 "ctrlr_loss_timeout_sec": 0, 00:13:15.242 "ddgst": false, 00:13:15.242 "fast_io_fail_timeout_sec": 0, 00:13:15.242 "hdgst": false, 00:13:15.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:15.242 "name": "TLSTEST", 00:13:15.242 "prchk_guard": false, 00:13:15.242 "prchk_reftag": false, 00:13:15.242 "psk": "/tmp/tmp.iCkTIHcImr", 00:13:15.242 "reconnect_delay_sec": 0, 00:13:15.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.242 "traddr": "10.0.0.2", 00:13:15.242 "trsvcid": "4420", 00:13:15.242 "trtype": "TCP" 00:13:15.242 } 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "method": "bdev_nvme_set_hotplug", 00:13:15.242 "params": { 00:13:15.242 "enable": false, 00:13:15.242 "period_us": 100000 00:13:15.242 } 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "method": "bdev_wait_for_examine" 00:13:15.242 } 00:13:15.242 ] 00:13:15.242 }, 00:13:15.242 { 00:13:15.242 "subsystem": "nbd", 00:13:15.242 "config": [] 00:13:15.242 } 00:13:15.242 ] 00:13:15.242 }' 00:13:15.242 23:00:27 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 78398 00:13:15.242 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78398 ']' 00:13:15.242 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78398 00:13:15.242 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:15.242 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:15.242 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78398 00:13:15.242 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:15.242 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:15.242 killing process with pid 78398 00:13:15.242 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78398' 00:13:15.242 Received shutdown signal, test time was about 10.000000 seconds 00:13:15.242 00:13:15.242 Latency(us) 00:13:15.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.242 =================================================================================================================== 00:13:15.242 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:15.242 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78398 00:13:15.500 [2024-05-14 23:00:27.632093] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78398 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 78308 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78308 ']' 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78308 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = 
Linux ']' 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78308 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:15.500 killing process with pid 78308 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78308' 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78308 00:13:15.500 [2024-05-14 23:00:27.850774] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:15.500 [2024-05-14 23:00:27.850816] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:15.500 23:00:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78308 00:13:15.757 23:00:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:15.757 23:00:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:15.757 23:00:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:15.757 "subsystems": [ 00:13:15.757 { 00:13:15.757 "subsystem": "keyring", 00:13:15.757 "config": [] 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 "subsystem": "iobuf", 00:13:15.757 "config": [ 00:13:15.757 { 00:13:15.757 "method": "iobuf_set_options", 00:13:15.757 "params": { 00:13:15.757 "large_bufsize": 135168, 00:13:15.757 "large_pool_count": 1024, 00:13:15.757 "small_bufsize": 8192, 00:13:15.757 "small_pool_count": 8192 00:13:15.757 } 00:13:15.757 } 00:13:15.757 ] 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 "subsystem": "sock", 00:13:15.757 "config": [ 00:13:15.757 { 00:13:15.757 "method": "sock_impl_set_options", 00:13:15.757 "params": { 00:13:15.757 "enable_ktls": false, 00:13:15.757 "enable_placement_id": 0, 00:13:15.757 "enable_quickack": false, 00:13:15.757 "enable_recv_pipe": true, 00:13:15.757 "enable_zerocopy_send_client": false, 00:13:15.757 "enable_zerocopy_send_server": true, 00:13:15.757 "impl_name": "posix", 00:13:15.757 "recv_buf_size": 2097152, 00:13:15.757 "send_buf_size": 2097152, 00:13:15.757 "tls_version": 0, 00:13:15.757 "zerocopy_threshold": 0 00:13:15.757 } 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 "method": "sock_impl_set_options", 00:13:15.757 "params": { 00:13:15.757 "enable_ktls": false, 00:13:15.757 "enable_placement_id": 0, 00:13:15.757 "enable_quickack": false, 00:13:15.757 "enable_recv_pipe": true, 00:13:15.757 "enable_zerocopy_send_client": false, 00:13:15.757 "enable_zerocopy_send_server": true, 00:13:15.757 "impl_name": "ssl", 00:13:15.757 "recv_buf_size": 4096, 00:13:15.757 "send_buf_size": 4096, 00:13:15.757 "tls_version": 0, 00:13:15.757 "zerocopy_threshold": 0 00:13:15.757 } 00:13:15.757 } 00:13:15.757 ] 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 "subsystem": "vmd", 00:13:15.757 "config": [] 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 "subsystem": "accel", 00:13:15.757 "config": [ 00:13:15.757 { 00:13:15.757 "method": "accel_set_options", 00:13:15.757 "params": { 00:13:15.757 "buf_count": 2048, 00:13:15.757 "large_cache_size": 16, 00:13:15.757 "sequence_count": 2048, 00:13:15.757 "small_cache_size": 128, 00:13:15.757 "task_count": 2048 00:13:15.757 } 00:13:15.757 } 00:13:15.757 ] 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 
"subsystem": "bdev", 00:13:15.757 "config": [ 00:13:15.757 { 00:13:15.757 "method": "bdev_set_options", 00:13:15.757 "params": { 00:13:15.757 "bdev_auto_examine": true, 00:13:15.757 "bdev_io_cache_size": 256, 00:13:15.757 "bdev_io_pool_size": 65535, 00:13:15.757 "iobuf_large_cache_size": 16, 00:13:15.757 "iobuf_small_cache_size": 128 00:13:15.757 } 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 "method": "bdev_raid_set_options", 00:13:15.757 "params": { 00:13:15.757 "process_window_size_kb": 1024 00:13:15.757 } 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 "method": "bdev_iscsi_set_options", 00:13:15.757 "params": { 00:13:15.757 "timeout_sec": 30 00:13:15.757 } 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 "method": "bdev_nvme_set_options", 00:13:15.757 "params": { 00:13:15.757 "action_on_timeout": "none", 00:13:15.757 "allow_accel_sequence": false, 00:13:15.757 "arbitration_burst": 0, 00:13:15.757 "bdev_retry_count": 3, 00:13:15.757 "ctrlr_loss_timeout_sec": 0, 00:13:15.757 "delay_cmd_submit": true, 00:13:15.757 "dhchap_dhgroups": [ 00:13:15.757 "null", 00:13:15.757 "ffdhe2048", 00:13:15.757 "ffdhe3072", 00:13:15.757 "ffdhe4096", 00:13:15.757 "ffdhe6144", 00:13:15.757 "ffdhe8192" 00:13:15.757 ], 00:13:15.757 "dhchap_digests": [ 00:13:15.757 "sha256", 00:13:15.757 "sha384", 00:13:15.757 "sha512" 00:13:15.757 ], 00:13:15.757 "disable_auto_failback": false, 00:13:15.757 "fast_io_fail_timeout_sec": 0, 00:13:15.757 "generate_uuids": false, 00:13:15.757 "high_priority_weight": 0, 00:13:15.757 "io_path_stat": false, 00:13:15.757 "io_queue_requests": 0, 00:13:15.757 "keep_alive_timeout_ms": 10000, 00:13:15.757 "low_priority_weight": 0, 00:13:15.757 "medium_priority_weight": 0, 00:13:15.757 "nvme_adminq_poll_period_us": 10000, 00:13:15.757 "nvme_error_stat": false, 00:13:15.757 "nvme_ioq_poll_period_us": 0, 00:13:15.757 "rdma_cm_event_timeout_ms": 0, 00:13:15.757 "rdma_max_cq_size": 0, 00:13:15.757 "rdma_srq_size": 0, 00:13:15.757 "reconnect_delay_sec": 0, 00:13:15.757 "timeout_admin_us": 0, 00:13:15.757 "timeout_us": 0, 00:13:15.757 "transport_ack_timeout": 0, 00:13:15.757 "transport_retry_count": 4, 00:13:15.757 "transport_tos": 0 00:13:15.757 } 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 "method": "bdev_nvme_set_hotplug", 00:13:15.757 23:00:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:15.757 "params": { 00:13:15.757 "enable": false, 00:13:15.757 "period_us": 100000 00:13:15.757 } 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 "method": "bdev_malloc_create", 00:13:15.757 "params": { 00:13:15.757 "block_size": 4096, 00:13:15.757 "name": "malloc0", 00:13:15.757 "num_blocks": 8192, 00:13:15.757 "optimal_io_boundary": 0, 00:13:15.757 "physical_block_size": 4096, 00:13:15.757 "uuid": "bdce47de-bb78-40a0-a9b9-1ba7b1ca6389" 00:13:15.757 } 00:13:15.757 }, 00:13:15.757 { 00:13:15.757 "method": "bdev_wait_for_examine" 00:13:15.757 } 00:13:15.757 ] 00:13:15.757 }, 00:13:15.758 { 00:13:15.758 "subsystem": "nbd", 00:13:15.758 "config": [] 00:13:15.758 }, 00:13:15.758 { 00:13:15.758 "subsystem": "scheduler", 00:13:15.758 "config": [ 00:13:15.758 { 00:13:15.758 "method": "framework_set_scheduler", 00:13:15.758 "params": { 00:13:15.758 "name": "static" 00:13:15.758 } 00:13:15.758 } 00:13:15.758 ] 00:13:15.758 }, 00:13:15.758 { 00:13:15.758 "subsystem": "nvmf", 00:13:15.758 "config": [ 00:13:15.758 { 00:13:15.758 "method": "nvmf_set_config", 00:13:15.758 "params": { 00:13:15.758 "admin_cmd_passthru": { 00:13:15.758 "identify_ctrlr": false 00:13:15.758 }, 00:13:15.758 "discovery_filter": 
"match_any" 00:13:15.758 } 00:13:15.758 }, 00:13:15.758 { 00:13:15.758 "method": "nvmf_set_max_subsystems", 00:13:15.758 "params": { 00:13:15.758 "max_subsystems": 1024 00:13:15.758 } 00:13:15.758 }, 00:13:15.758 { 00:13:15.758 "method": "nvmf_set_crdt", 00:13:15.758 "params": { 00:13:15.758 "crdt1": 0, 00:13:15.758 "crdt2": 0, 00:13:15.758 "crdt3": 0 00:13:15.758 } 00:13:15.758 }, 00:13:15.758 { 00:13:15.758 "method": "nvmf_create_transport", 00:13:15.758 "params": { 00:13:15.758 "abort_timeout_sec": 1, 00:13:15.758 "ack_timeout": 0, 00:13:15.758 "buf_cache_size": 4294967295, 00:13:15.758 "c2h_success": false, 00:13:15.758 "data_wr_pool_size": 0, 00:13:15.758 "dif_insert_or_strip": false, 00:13:15.758 "in_capsule_data_size": 4096, 00:13:15.758 "io_unit_size": 131072, 00:13:15.758 "max_aq_depth": 128, 00:13:15.758 "max_io_qpairs_per_ctrlr": 127, 00:13:15.758 "max_io_size": 131072, 00:13:15.758 "max_queue_depth": 128, 00:13:15.758 "num_shared_buffers": 511, 00:13:15.758 "sock_priority": 0, 00:13:15.758 "trtype": "TCP", 00:13:15.758 "zcopy": false 00:13:15.758 } 00:13:15.758 }, 00:13:15.758 { 00:13:15.758 "method": "nvmf_create_subsystem", 00:13:15.758 "params": { 00:13:15.758 "allow_any_host": false, 00:13:15.758 "ana_reporting": false, 00:13:15.758 "max_cntlid": 65519, 00:13:15.758 "max_namespaces": 10, 00:13:15.758 "min_cntlid": 1, 00:13:15.758 "model_number": "SPDK bdev Controller", 00:13:15.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.758 "serial_number": "SPDK00000000000001" 00:13:15.758 } 00:13:15.758 }, 00:13:15.758 { 00:13:15.758 "method": "nvmf_subsystem_add_host", 00:13:15.758 "params": { 00:13:15.758 "host": "nqn.2016-06.io.spdk:host1", 00:13:15.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.758 "psk": "/tmp/tmp.iCkTIHcImr" 00:13:15.758 } 00:13:15.758 }, 00:13:15.758 { 00:13:15.758 "method": "nvmf_subsystem_add_ns", 00:13:15.758 "params": { 00:13:15.758 "namespace": { 00:13:15.758 "bdev_name": "malloc0", 00:13:15.758 "nguid": "BDCE47DEBB7840A0A9B91BA7B1CA6389", 00:13:15.758 "no_auto_visible": false, 00:13:15.758 "nsid": 1, 00:13:15.758 "uuid": "bdce47de-bb78-40a0-a9b9-1ba7b1ca6389" 00:13:15.758 }, 00:13:15.758 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:13:15.758 } 00:13:15.758 }, 00:13:15.758 { 00:13:15.758 "method": "nvmf_subsystem_add_listener", 00:13:15.758 "params": { 00:13:15.758 "listen_address": { 00:13:15.758 "adrfam": "IPv4", 00:13:15.758 "traddr": "10.0.0.2", 00:13:15.758 "trsvcid": "4420", 00:13:15.758 "trtype": "TCP" 00:13:15.758 }, 00:13:15.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.758 "secure_channel": true 00:13:15.758 } 00:13:15.758 } 00:13:15.758 ] 00:13:15.758 } 00:13:15.758 ] 00:13:15.758 }' 00:13:15.758 23:00:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.758 23:00:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78458 00:13:15.758 23:00:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:15.758 23:00:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78458 00:13:15.758 23:00:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78458 ']' 00:13:15.758 23:00:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.758 23:00:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:15.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:15.758 23:00:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.758 23:00:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:15.758 23:00:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.758 [2024-05-14 23:00:28.109741] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:15.758 [2024-05-14 23:00:28.109858] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.015 [2024-05-14 23:00:28.242713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.015 [2024-05-14 23:00:28.302524] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.015 [2024-05-14 23:00:28.302571] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.015 [2024-05-14 23:00:28.302583] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.015 [2024-05-14 23:00:28.302591] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.015 [2024-05-14 23:00:28.302598] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.015 [2024-05-14 23:00:28.302677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.272 [2024-05-14 23:00:28.480595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.272 [2024-05-14 23:00:28.496515] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:16.272 [2024-05-14 23:00:28.512467] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:16.272 [2024-05-14 23:00:28.512562] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:16.272 [2024-05-14 23:00:28.512747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=78502 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 78502 /var/tmp/bdevperf.sock 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78502 ']' 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify 
-t 10 -c /dev/fd/63 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:16.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:16.838 23:00:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:16.838 "subsystems": [ 00:13:16.838 { 00:13:16.838 "subsystem": "keyring", 00:13:16.838 "config": [] 00:13:16.838 }, 00:13:16.838 { 00:13:16.838 "subsystem": "iobuf", 00:13:16.838 "config": [ 00:13:16.838 { 00:13:16.838 "method": "iobuf_set_options", 00:13:16.838 "params": { 00:13:16.838 "large_bufsize": 135168, 00:13:16.838 "large_pool_count": 1024, 00:13:16.838 "small_bufsize": 8192, 00:13:16.838 "small_pool_count": 8192 00:13:16.838 } 00:13:16.838 } 00:13:16.838 ] 00:13:16.838 }, 00:13:16.838 { 00:13:16.838 "subsystem": "sock", 00:13:16.838 "config": [ 00:13:16.838 { 00:13:16.838 "method": "sock_impl_set_options", 00:13:16.838 "params": { 00:13:16.838 "enable_ktls": false, 00:13:16.838 "enable_placement_id": 0, 00:13:16.838 "enable_quickack": false, 00:13:16.838 "enable_recv_pipe": true, 00:13:16.838 "enable_zerocopy_send_client": false, 00:13:16.838 "enable_zerocopy_send_server": true, 00:13:16.838 "impl_name": "posix", 00:13:16.838 "recv_buf_size": 2097152, 00:13:16.838 "send_buf_size": 2097152, 00:13:16.838 "tls_version": 0, 00:13:16.838 "zerocopy_threshold": 0 00:13:16.838 } 00:13:16.838 }, 00:13:16.838 { 00:13:16.838 "method": "sock_impl_set_options", 00:13:16.838 "params": { 00:13:16.838 "enable_ktls": false, 00:13:16.838 "enable_placement_id": 0, 00:13:16.838 "enable_quickack": false, 00:13:16.838 "enable_recv_pipe": true, 00:13:16.838 "enable_zerocopy_send_client": false, 00:13:16.838 "enable_zerocopy_send_server": true, 00:13:16.838 "impl_name": "ssl", 00:13:16.838 "recv_buf_size": 4096, 00:13:16.838 "send_buf_size": 4096, 00:13:16.838 "tls_version": 0, 00:13:16.838 "zerocopy_threshold": 0 00:13:16.838 } 00:13:16.838 } 00:13:16.838 ] 00:13:16.838 }, 00:13:16.838 { 00:13:16.838 "subsystem": "vmd", 00:13:16.838 "config": [] 00:13:16.838 }, 00:13:16.838 { 00:13:16.838 "subsystem": "accel", 00:13:16.838 "config": [ 00:13:16.838 { 00:13:16.838 "method": "accel_set_options", 00:13:16.838 "params": { 00:13:16.838 "buf_count": 2048, 00:13:16.838 "large_cache_size": 16, 00:13:16.838 "sequence_count": 2048, 00:13:16.838 "small_cache_size": 128, 00:13:16.838 "task_count": 2048 00:13:16.838 } 00:13:16.838 } 00:13:16.838 ] 00:13:16.838 }, 00:13:16.838 { 00:13:16.838 "subsystem": "bdev", 00:13:16.838 "config": [ 00:13:16.838 { 00:13:16.838 "method": "bdev_set_options", 00:13:16.838 "params": { 00:13:16.838 "bdev_auto_examine": true, 00:13:16.838 "bdev_io_cache_size": 256, 00:13:16.838 "bdev_io_pool_size": 65535, 00:13:16.838 "iobuf_large_cache_size": 16, 00:13:16.838 "iobuf_small_cache_size": 128 00:13:16.838 } 00:13:16.838 }, 00:13:16.838 { 00:13:16.838 "method": "bdev_raid_set_options", 00:13:16.838 "params": { 00:13:16.838 "process_window_size_kb": 1024 00:13:16.838 } 00:13:16.838 }, 00:13:16.838 { 00:13:16.838 "method": "bdev_iscsi_set_options", 00:13:16.838 "params": { 00:13:16.838 "timeout_sec": 30 00:13:16.838 } 00:13:16.838 }, 00:13:16.838 { 00:13:16.838 "method": "bdev_nvme_set_options", 00:13:16.838 "params": { 00:13:16.838 "action_on_timeout": "none", 00:13:16.838 "allow_accel_sequence": false, 00:13:16.838 
"arbitration_burst": 0, 00:13:16.839 "bdev_retry_count": 3, 00:13:16.839 "ctrlr_loss_timeout_sec": 0, 00:13:16.839 "delay_cmd_submit": true, 00:13:16.839 "dhchap_dhgroups": [ 00:13:16.839 "null", 00:13:16.839 "ffdhe2048", 00:13:16.839 "ffdhe3072", 00:13:16.839 "ffdhe4096", 00:13:16.839 "ffdhe6144", 00:13:16.839 "ffdhe8192" 00:13:16.839 ], 00:13:16.839 "dhchap_digests": [ 00:13:16.839 "sha256", 00:13:16.839 "sha384", 00:13:16.839 "sha512" 00:13:16.839 ], 00:13:16.839 "disable_auto_failback": false, 00:13:16.839 "fast_io_fail_timeout_sec": 0, 00:13:16.839 "generate_uuids": false, 00:13:16.839 "high_priority_weight": 0, 00:13:16.839 "io_path_stat": false, 00:13:16.839 "io_queue_requests": 512, 00:13:16.839 "keep_alive_timeout_ms": 10000, 00:13:16.839 "low_priority_weight": 0, 00:13:16.839 "medium_priority_weight": 0, 00:13:16.839 "nvme_adminq_poll_period_us": 10000, 00:13:16.839 "nvme_error_stat": false, 00:13:16.839 "nvme_ioq_poll_period_us": 0, 00:13:16.839 "rdma_cm_event_timeout_ms": 0, 00:13:16.839 "rdma_max_cq_size": 0, 00:13:16.839 "rdma_srq_size": 0, 00:13:16.839 "reconnect_delay_sec": 0, 00:13:16.839 "timeout_admin_us": 0, 00:13:16.839 "timeout_us": 0, 00:13:16.839 "transport_ack_timeout": 0, 00:13:16.839 "transport_retry_count": 4, 00:13:16.839 "transport_tos": 0 00:13:16.839 } 00:13:16.839 }, 00:13:16.839 { 00:13:16.839 "method": "bdev_nvme_attach_controller", 00:13:16.839 "params": { 00:13:16.839 "adrfam": "IPv4", 00:13:16.839 "ctrlr_loss_timeout_sec": 0, 00:13:16.839 "ddgst": false, 00:13:16.839 "fast_io_fail_timeout_sec": 0, 00:13:16.839 "hdgst": false, 00:13:16.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:16.839 "name": "TLSTEST", 00:13:16.839 "prchk_guard": false, 00:13:16.839 "prchk_reftag": false, 00:13:16.839 "psk": "/tmp/tmp.iCkTIHcImr", 00:13:16.839 "reconnect_delay_sec": 0, 00:13:16.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:16.839 "traddr": "10.0.0.2", 00:13:16.839 "trsvcid": "4420", 00:13:16.839 "trtype": "TCP" 00:13:16.839 } 00:13:16.839 }, 00:13:16.839 { 00:13:16.839 "method": "bdev_nvme_set_hotplug", 00:13:16.839 "params": { 00:13:16.839 "enable": false, 00:13:16.839 "period_us": 100000 00:13:16.839 } 00:13:16.839 }, 00:13:16.839 { 00:13:16.839 "method": "bdev_wait_for_examine" 00:13:16.839 } 00:13:16.839 ] 00:13:16.839 }, 00:13:16.839 { 00:13:16.839 "subsystem": "nbd", 00:13:16.839 "config": [] 00:13:16.839 } 00:13:16.839 ] 00:13:16.839 }' 00:13:16.839 23:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:16.839 23:00:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.095 [2024-05-14 23:00:29.261132] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:13:17.095 [2024-05-14 23:00:29.261229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78502 ] 00:13:17.095 [2024-05-14 23:00:29.401843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.095 [2024-05-14 23:00:29.475489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.353 [2024-05-14 23:00:29.601878] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:17.353 [2024-05-14 23:00:29.602003] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:17.918 23:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:17.918 23:00:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:17.918 23:00:30 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:18.176 Running I/O for 10 seconds... 00:13:28.145 00:13:28.145 Latency(us) 00:13:28.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.145 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:28.145 Verification LBA range: start 0x0 length 0x2000 00:13:28.145 TLSTESTn1 : 10.02 3491.43 13.64 0.00 0.00 36587.18 7566.43 64821.06 00:13:28.145 =================================================================================================================== 00:13:28.145 Total : 3491.43 13.64 0.00 0.00 36587.18 7566.43 64821.06 00:13:28.145 0 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 78502 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78502 ']' 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78502 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78502 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:28.145 killing process with pid 78502 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78502' 00:13:28.145 Received shutdown signal, test time was about 10.000000 seconds 00:13:28.145 00:13:28.145 Latency(us) 00:13:28.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.145 =================================================================================================================== 00:13:28.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78502 00:13:28.145 [2024-05-14 23:00:40.490408] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:28.145 23:00:40 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 78502 00:13:28.403 23:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 78458 00:13:28.403 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78458 ']' 00:13:28.403 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78458 00:13:28.403 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:28.403 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:28.403 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78458 00:13:28.403 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:28.403 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:28.403 killing process with pid 78458 00:13:28.403 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78458' 00:13:28.403 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78458 00:13:28.403 [2024-05-14 23:00:40.706476] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:28.403 [2024-05-14 23:00:40.706517] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:28.403 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78458 00:13:28.660 23:00:40 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:13:28.660 23:00:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:28.660 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:28.660 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.660 23:00:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78657 00:13:28.660 23:00:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:28.661 23:00:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78657 00:13:28.661 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78657 ']' 00:13:28.661 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.661 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:28.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.661 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.661 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:28.661 23:00:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.661 [2024-05-14 23:00:40.964864] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
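A quick sanity check on the 10-second result table above: with 4096-byte I/O the two throughput columns must agree, since MiB/s = IOPS × 4096 / 2^20, i.e. IOPS / 256. For the TLSTESTn1 run:

    # 3491.43 IOPS at 4 KiB per I/O works out to the 13.64 MiB/s reported above.
    awk 'BEGIN { printf "%.2f MiB/s\n", 3491.43 * 4096 / (1024 * 1024) }'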
00:13:28.661 [2024-05-14 23:00:40.964955] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.919 [2024-05-14 23:00:41.104094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.919 [2024-05-14 23:00:41.164285] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.919 [2024-05-14 23:00:41.164335] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.919 [2024-05-14 23:00:41.164347] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.919 [2024-05-14 23:00:41.164355] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.919 [2024-05-14 23:00:41.164363] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.919 [2024-05-14 23:00:41.164389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.854 23:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:29.854 23:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:29.854 23:00:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.854 23:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.854 23:00:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:29.854 23:00:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.854 23:00:41 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.iCkTIHcImr 00:13:29.854 23:00:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iCkTIHcImr 00:13:29.854 23:00:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:30.112 [2024-05-14 23:00:42.244934] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.112 23:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:30.371 23:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:30.630 [2024-05-14 23:00:42.829069] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:30.630 [2024-05-14 23:00:42.829201] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:30.630 [2024-05-14 23:00:42.829395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.630 23:00:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:30.888 malloc0 00:13:30.888 23:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:31.147 23:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iCkTIHcImr 00:13:31.405 [2024-05-14 23:00:43.644273] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:31.405 23:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=78761 00:13:31.405 23:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:31.405 23:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:31.405 23:00:43 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 78761 /var/tmp/bdevperf.sock 00:13:31.405 23:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78761 ']' 00:13:31.405 23:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:31.405 23:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:31.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:31.405 23:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:31.405 23:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:31.405 23:00:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.405 [2024-05-14 23:00:43.720801] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:31.405 [2024-05-14 23:00:43.720891] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78761 ] 00:13:31.664 [2024-05-14 23:00:43.863521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.664 [2024-05-14 23:00:43.938083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.664 23:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:31.664 23:00:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:31.664 23:00:44 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iCkTIHcImr 00:13:31.923 23:00:44 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:32.181 [2024-05-14 23:00:44.572381] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:32.440 nvme0n1 00:13:32.440 23:00:44 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:32.440 Running I/O for 1 seconds... 
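This attach takes the keyring route: the PSK is registered on the bdevperf side with keyring_file_add_key and then referenced by name (--psk key0) in bdev_nvme_attach_controller, instead of handing the controller a raw file path as the earlier TLSTEST attach did (the form behind the spdk_nvme_ctrlr_opts.psk deprecation warnings seen earlier in this log). The sequence, lifted from the trace above with the sockets and NQNs used in this run:

    # Target side (default RPC socket): allow host1 on cnode1 with the PSK file
    # (still the deprecated "PSK path" form on the target here).
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iCkTIHcImr
    # Initiator side (bdevperf socket): register the PSK in the keyring, then attach by key name.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iCkTIHcImr
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1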
00:13:33.818 00:13:33.818 Latency(us) 00:13:33.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.818 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:33.818 Verification LBA range: start 0x0 length 0x2000 00:13:33.818 nvme0n1 : 1.02 3519.74 13.75 0.00 0.00 35887.49 4468.36 38844.97 00:13:33.818 =================================================================================================================== 00:13:33.818 Total : 3519.74 13.75 0.00 0.00 35887.49 4468.36 38844.97 00:13:33.818 0 00:13:33.818 23:00:45 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 78761 00:13:33.818 23:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78761 ']' 00:13:33.818 23:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78761 00:13:33.818 23:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:33.818 23:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:33.818 23:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78761 00:13:33.818 23:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:33.818 23:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:33.819 23:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78761' 00:13:33.819 killing process with pid 78761 00:13:33.819 Received shutdown signal, test time was about 1.000000 seconds 00:13:33.819 00:13:33.819 Latency(us) 00:13:33.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.819 =================================================================================================================== 00:13:33.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.819 23:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78761 00:13:33.819 23:00:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78761 00:13:33.819 23:00:46 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 78657 00:13:33.819 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78657 ']' 00:13:33.819 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78657 00:13:33.819 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:33.819 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:33.819 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78657 00:13:33.819 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:33.819 killing process with pid 78657 00:13:33.819 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:33.819 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78657' 00:13:33.819 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78657 00:13:33.819 [2024-05-14 23:00:46.070576] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:33.819 [2024-05-14 23:00:46.070617] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:33.819 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 
78657 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78817 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78817 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78817 ']' 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:34.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:34.077 23:00:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.077 [2024-05-14 23:00:46.334564] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:34.077 [2024-05-14 23:00:46.334660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.334 [2024-05-14 23:00:46.473750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.334 [2024-05-14 23:00:46.541339] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.334 [2024-05-14 23:00:46.541394] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.334 [2024-05-14 23:00:46.541410] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.334 [2024-05-14 23:00:46.541419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.334 [2024-05-14 23:00:46.541426] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
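The NOTICE lines directly above describe how to inspect this target while it runs: it was started with -e 0xFFFF, so every tracepoint group is enabled and can be read live or from the shared-memory buffer named in the log. A minimal sketch following those hints (redirecting spdk_trace output to a file is an assumption about how you want to keep it, and the tool's location depends on your build layout):

    # Live snapshot of the nvmf app's tracepoints, instance id 0, exactly as the NOTICE suggests.
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Or keep the raw trace buffer for offline analysis, per the last NOTICE.
    cp /dev/shm/nvmf_trace.0 .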
00:13:34.334 [2024-05-14 23:00:46.541469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:35.269 [2024-05-14 23:00:47.398178] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.269 malloc0 00:13:35.269 [2024-05-14 23:00:47.425615] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:35.269 [2024-05-14 23:00:47.425724] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:35.269 [2024-05-14 23:00:47.425936] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=78867 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 78867 /var/tmp/bdevperf.sock 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78867 ']' 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:35.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:35.269 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:35.269 [2024-05-14 23:00:47.510992] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
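After the final 1-second run below, the target configuration is dumped once more (rpc_cmd save_config into tgtcfg), and that dump is the point of this last phase: the PSK now lives in the keyring subsystem (keyring_file_add_key, name key0, path /tmp/tmp.iCkTIHcImr) and nvmf_subsystem_add_host records "psk": "key0" rather than a file path, so the TLS setup survives a save_config round trip. A quick way to confirm the same thing against a live target; the grep patterns are only illustrative:

    # Dump the running target's config and check that the PSK is referenced by key name.
    scripts/rpc.py save_config > tgt.json
    grep -n '"keyring_file_add_key"' tgt.json   # expect the key0 entry pointing at the PSK file
    grep -n '"psk"' tgt.json                    # expect "psk": "key0" under nvmf_subsystem_add_host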
00:13:35.269 [2024-05-14 23:00:47.511092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78867 ] 00:13:35.269 [2024-05-14 23:00:47.649196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.527 [2024-05-14 23:00:47.724139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.527 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:35.527 23:00:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:35.527 23:00:47 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iCkTIHcImr 00:13:35.786 23:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:36.043 [2024-05-14 23:00:48.403695] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:36.301 nvme0n1 00:13:36.301 23:00:48 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:36.301 Running I/O for 1 seconds... 00:13:37.677 00:13:37.677 Latency(us) 00:13:37.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.677 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:37.677 Verification LBA range: start 0x0 length 0x2000 00:13:37.677 nvme0n1 : 1.03 3692.40 14.42 0.00 0.00 34154.14 9770.82 24903.68 00:13:37.677 =================================================================================================================== 00:13:37.677 Total : 3692.40 14.42 0.00 0.00 34154.14 9770.82 24903.68 00:13:37.677 0 00:13:37.677 23:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:13:37.677 23:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.677 23:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.677 23:00:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.677 23:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:13:37.677 "subsystems": [ 00:13:37.677 { 00:13:37.677 "subsystem": "keyring", 00:13:37.677 "config": [ 00:13:37.677 { 00:13:37.677 "method": "keyring_file_add_key", 00:13:37.677 "params": { 00:13:37.677 "name": "key0", 00:13:37.677 "path": "/tmp/tmp.iCkTIHcImr" 00:13:37.677 } 00:13:37.677 } 00:13:37.677 ] 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "subsystem": "iobuf", 00:13:37.677 "config": [ 00:13:37.677 { 00:13:37.677 "method": "iobuf_set_options", 00:13:37.677 "params": { 00:13:37.677 "large_bufsize": 135168, 00:13:37.677 "large_pool_count": 1024, 00:13:37.677 "small_bufsize": 8192, 00:13:37.677 "small_pool_count": 8192 00:13:37.677 } 00:13:37.677 } 00:13:37.677 ] 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "subsystem": "sock", 00:13:37.677 "config": [ 00:13:37.677 { 00:13:37.677 "method": "sock_impl_set_options", 00:13:37.677 "params": { 00:13:37.677 "enable_ktls": false, 00:13:37.677 "enable_placement_id": 0, 00:13:37.677 "enable_quickack": false, 00:13:37.677 "enable_recv_pipe": true, 00:13:37.677 
"enable_zerocopy_send_client": false, 00:13:37.677 "enable_zerocopy_send_server": true, 00:13:37.677 "impl_name": "posix", 00:13:37.677 "recv_buf_size": 2097152, 00:13:37.677 "send_buf_size": 2097152, 00:13:37.677 "tls_version": 0, 00:13:37.677 "zerocopy_threshold": 0 00:13:37.677 } 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "method": "sock_impl_set_options", 00:13:37.677 "params": { 00:13:37.677 "enable_ktls": false, 00:13:37.677 "enable_placement_id": 0, 00:13:37.677 "enable_quickack": false, 00:13:37.677 "enable_recv_pipe": true, 00:13:37.677 "enable_zerocopy_send_client": false, 00:13:37.677 "enable_zerocopy_send_server": true, 00:13:37.677 "impl_name": "ssl", 00:13:37.677 "recv_buf_size": 4096, 00:13:37.677 "send_buf_size": 4096, 00:13:37.677 "tls_version": 0, 00:13:37.677 "zerocopy_threshold": 0 00:13:37.677 } 00:13:37.677 } 00:13:37.677 ] 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "subsystem": "vmd", 00:13:37.677 "config": [] 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "subsystem": "accel", 00:13:37.677 "config": [ 00:13:37.677 { 00:13:37.677 "method": "accel_set_options", 00:13:37.677 "params": { 00:13:37.677 "buf_count": 2048, 00:13:37.677 "large_cache_size": 16, 00:13:37.677 "sequence_count": 2048, 00:13:37.677 "small_cache_size": 128, 00:13:37.677 "task_count": 2048 00:13:37.677 } 00:13:37.677 } 00:13:37.677 ] 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "subsystem": "bdev", 00:13:37.677 "config": [ 00:13:37.677 { 00:13:37.677 "method": "bdev_set_options", 00:13:37.677 "params": { 00:13:37.677 "bdev_auto_examine": true, 00:13:37.677 "bdev_io_cache_size": 256, 00:13:37.677 "bdev_io_pool_size": 65535, 00:13:37.677 "iobuf_large_cache_size": 16, 00:13:37.677 "iobuf_small_cache_size": 128 00:13:37.677 } 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "method": "bdev_raid_set_options", 00:13:37.677 "params": { 00:13:37.677 "process_window_size_kb": 1024 00:13:37.677 } 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "method": "bdev_iscsi_set_options", 00:13:37.677 "params": { 00:13:37.677 "timeout_sec": 30 00:13:37.677 } 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "method": "bdev_nvme_set_options", 00:13:37.677 "params": { 00:13:37.677 "action_on_timeout": "none", 00:13:37.677 "allow_accel_sequence": false, 00:13:37.677 "arbitration_burst": 0, 00:13:37.677 "bdev_retry_count": 3, 00:13:37.677 "ctrlr_loss_timeout_sec": 0, 00:13:37.677 "delay_cmd_submit": true, 00:13:37.677 "dhchap_dhgroups": [ 00:13:37.677 "null", 00:13:37.677 "ffdhe2048", 00:13:37.677 "ffdhe3072", 00:13:37.677 "ffdhe4096", 00:13:37.677 "ffdhe6144", 00:13:37.677 "ffdhe8192" 00:13:37.677 ], 00:13:37.677 "dhchap_digests": [ 00:13:37.677 "sha256", 00:13:37.677 "sha384", 00:13:37.677 "sha512" 00:13:37.677 ], 00:13:37.677 "disable_auto_failback": false, 00:13:37.677 "fast_io_fail_timeout_sec": 0, 00:13:37.677 "generate_uuids": false, 00:13:37.677 "high_priority_weight": 0, 00:13:37.677 "io_path_stat": false, 00:13:37.677 "io_queue_requests": 0, 00:13:37.677 "keep_alive_timeout_ms": 10000, 00:13:37.677 "low_priority_weight": 0, 00:13:37.677 "medium_priority_weight": 0, 00:13:37.677 "nvme_adminq_poll_period_us": 10000, 00:13:37.677 "nvme_error_stat": false, 00:13:37.677 "nvme_ioq_poll_period_us": 0, 00:13:37.677 "rdma_cm_event_timeout_ms": 0, 00:13:37.677 "rdma_max_cq_size": 0, 00:13:37.677 "rdma_srq_size": 0, 00:13:37.677 "reconnect_delay_sec": 0, 00:13:37.677 "timeout_admin_us": 0, 00:13:37.677 "timeout_us": 0, 00:13:37.677 "transport_ack_timeout": 0, 00:13:37.677 "transport_retry_count": 4, 00:13:37.677 "transport_tos": 0 
00:13:37.677 } 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "method": "bdev_nvme_set_hotplug", 00:13:37.677 "params": { 00:13:37.677 "enable": false, 00:13:37.677 "period_us": 100000 00:13:37.677 } 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "method": "bdev_malloc_create", 00:13:37.677 "params": { 00:13:37.677 "block_size": 4096, 00:13:37.677 "name": "malloc0", 00:13:37.677 "num_blocks": 8192, 00:13:37.677 "optimal_io_boundary": 0, 00:13:37.677 "physical_block_size": 4096, 00:13:37.677 "uuid": "6ff4b729-53df-4708-bb2f-3c20627872ac" 00:13:37.677 } 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "method": "bdev_wait_for_examine" 00:13:37.677 } 00:13:37.677 ] 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "subsystem": "nbd", 00:13:37.677 "config": [] 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "subsystem": "scheduler", 00:13:37.677 "config": [ 00:13:37.677 { 00:13:37.677 "method": "framework_set_scheduler", 00:13:37.677 "params": { 00:13:37.677 "name": "static" 00:13:37.677 } 00:13:37.677 } 00:13:37.677 ] 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "subsystem": "nvmf", 00:13:37.677 "config": [ 00:13:37.677 { 00:13:37.677 "method": "nvmf_set_config", 00:13:37.677 "params": { 00:13:37.677 "admin_cmd_passthru": { 00:13:37.677 "identify_ctrlr": false 00:13:37.677 }, 00:13:37.677 "discovery_filter": "match_any" 00:13:37.677 } 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "method": "nvmf_set_max_subsystems", 00:13:37.677 "params": { 00:13:37.677 "max_subsystems": 1024 00:13:37.677 } 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "method": "nvmf_set_crdt", 00:13:37.677 "params": { 00:13:37.677 "crdt1": 0, 00:13:37.677 "crdt2": 0, 00:13:37.677 "crdt3": 0 00:13:37.677 } 00:13:37.677 }, 00:13:37.677 { 00:13:37.677 "method": "nvmf_create_transport", 00:13:37.677 "params": { 00:13:37.677 "abort_timeout_sec": 1, 00:13:37.677 "ack_timeout": 0, 00:13:37.678 "buf_cache_size": 4294967295, 00:13:37.678 "c2h_success": false, 00:13:37.678 "data_wr_pool_size": 0, 00:13:37.678 "dif_insert_or_strip": false, 00:13:37.678 "in_capsule_data_size": 4096, 00:13:37.678 "io_unit_size": 131072, 00:13:37.678 "max_aq_depth": 128, 00:13:37.678 "max_io_qpairs_per_ctrlr": 127, 00:13:37.678 "max_io_size": 131072, 00:13:37.678 "max_queue_depth": 128, 00:13:37.678 "num_shared_buffers": 511, 00:13:37.678 "sock_priority": 0, 00:13:37.678 "trtype": "TCP", 00:13:37.678 "zcopy": false 00:13:37.678 } 00:13:37.678 }, 00:13:37.678 { 00:13:37.678 "method": "nvmf_create_subsystem", 00:13:37.678 "params": { 00:13:37.678 "allow_any_host": false, 00:13:37.678 "ana_reporting": false, 00:13:37.678 "max_cntlid": 65519, 00:13:37.678 "max_namespaces": 32, 00:13:37.678 "min_cntlid": 1, 00:13:37.678 "model_number": "SPDK bdev Controller", 00:13:37.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.678 "serial_number": "00000000000000000000" 00:13:37.678 } 00:13:37.678 }, 00:13:37.678 { 00:13:37.678 "method": "nvmf_subsystem_add_host", 00:13:37.678 "params": { 00:13:37.678 "host": "nqn.2016-06.io.spdk:host1", 00:13:37.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.678 "psk": "key0" 00:13:37.678 } 00:13:37.678 }, 00:13:37.678 { 00:13:37.678 "method": "nvmf_subsystem_add_ns", 00:13:37.678 "params": { 00:13:37.678 "namespace": { 00:13:37.678 "bdev_name": "malloc0", 00:13:37.678 "nguid": "6FF4B72953DF4708BB2F3C20627872AC", 00:13:37.678 "no_auto_visible": false, 00:13:37.678 "nsid": 1, 00:13:37.678 "uuid": "6ff4b729-53df-4708-bb2f-3c20627872ac" 00:13:37.678 }, 00:13:37.678 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:13:37.678 } 00:13:37.678 }, 00:13:37.678 { 00:13:37.678 
"method": "nvmf_subsystem_add_listener", 00:13:37.678 "params": { 00:13:37.678 "listen_address": { 00:13:37.678 "adrfam": "IPv4", 00:13:37.678 "traddr": "10.0.0.2", 00:13:37.678 "trsvcid": "4420", 00:13:37.678 "trtype": "TCP" 00:13:37.678 }, 00:13:37.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.678 "secure_channel": true 00:13:37.678 } 00:13:37.678 } 00:13:37.678 ] 00:13:37.678 } 00:13:37.678 ] 00:13:37.678 }' 00:13:37.678 23:00:49 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:37.937 23:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:13:37.937 "subsystems": [ 00:13:37.937 { 00:13:37.937 "subsystem": "keyring", 00:13:37.937 "config": [ 00:13:37.937 { 00:13:37.937 "method": "keyring_file_add_key", 00:13:37.937 "params": { 00:13:37.937 "name": "key0", 00:13:37.937 "path": "/tmp/tmp.iCkTIHcImr" 00:13:37.937 } 00:13:37.937 } 00:13:37.937 ] 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "subsystem": "iobuf", 00:13:37.937 "config": [ 00:13:37.937 { 00:13:37.937 "method": "iobuf_set_options", 00:13:37.937 "params": { 00:13:37.937 "large_bufsize": 135168, 00:13:37.937 "large_pool_count": 1024, 00:13:37.937 "small_bufsize": 8192, 00:13:37.937 "small_pool_count": 8192 00:13:37.937 } 00:13:37.937 } 00:13:37.937 ] 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "subsystem": "sock", 00:13:37.937 "config": [ 00:13:37.937 { 00:13:37.937 "method": "sock_impl_set_options", 00:13:37.937 "params": { 00:13:37.937 "enable_ktls": false, 00:13:37.937 "enable_placement_id": 0, 00:13:37.937 "enable_quickack": false, 00:13:37.937 "enable_recv_pipe": true, 00:13:37.937 "enable_zerocopy_send_client": false, 00:13:37.937 "enable_zerocopy_send_server": true, 00:13:37.937 "impl_name": "posix", 00:13:37.937 "recv_buf_size": 2097152, 00:13:37.937 "send_buf_size": 2097152, 00:13:37.937 "tls_version": 0, 00:13:37.937 "zerocopy_threshold": 0 00:13:37.937 } 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "method": "sock_impl_set_options", 00:13:37.937 "params": { 00:13:37.937 "enable_ktls": false, 00:13:37.937 "enable_placement_id": 0, 00:13:37.937 "enable_quickack": false, 00:13:37.937 "enable_recv_pipe": true, 00:13:37.937 "enable_zerocopy_send_client": false, 00:13:37.937 "enable_zerocopy_send_server": true, 00:13:37.937 "impl_name": "ssl", 00:13:37.937 "recv_buf_size": 4096, 00:13:37.937 "send_buf_size": 4096, 00:13:37.937 "tls_version": 0, 00:13:37.937 "zerocopy_threshold": 0 00:13:37.937 } 00:13:37.937 } 00:13:37.937 ] 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "subsystem": "vmd", 00:13:37.937 "config": [] 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "subsystem": "accel", 00:13:37.937 "config": [ 00:13:37.937 { 00:13:37.937 "method": "accel_set_options", 00:13:37.937 "params": { 00:13:37.937 "buf_count": 2048, 00:13:37.937 "large_cache_size": 16, 00:13:37.937 "sequence_count": 2048, 00:13:37.937 "small_cache_size": 128, 00:13:37.937 "task_count": 2048 00:13:37.937 } 00:13:37.937 } 00:13:37.937 ] 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "subsystem": "bdev", 00:13:37.937 "config": [ 00:13:37.937 { 00:13:37.937 "method": "bdev_set_options", 00:13:37.937 "params": { 00:13:37.937 "bdev_auto_examine": true, 00:13:37.937 "bdev_io_cache_size": 256, 00:13:37.937 "bdev_io_pool_size": 65535, 00:13:37.937 "iobuf_large_cache_size": 16, 00:13:37.937 "iobuf_small_cache_size": 128 00:13:37.937 } 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "method": "bdev_raid_set_options", 00:13:37.937 "params": { 00:13:37.937 "process_window_size_kb": 
1024 00:13:37.937 } 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "method": "bdev_iscsi_set_options", 00:13:37.937 "params": { 00:13:37.937 "timeout_sec": 30 00:13:37.937 } 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "method": "bdev_nvme_set_options", 00:13:37.937 "params": { 00:13:37.937 "action_on_timeout": "none", 00:13:37.937 "allow_accel_sequence": false, 00:13:37.937 "arbitration_burst": 0, 00:13:37.937 "bdev_retry_count": 3, 00:13:37.937 "ctrlr_loss_timeout_sec": 0, 00:13:37.937 "delay_cmd_submit": true, 00:13:37.937 "dhchap_dhgroups": [ 00:13:37.937 "null", 00:13:37.937 "ffdhe2048", 00:13:37.937 "ffdhe3072", 00:13:37.937 "ffdhe4096", 00:13:37.937 "ffdhe6144", 00:13:37.937 "ffdhe8192" 00:13:37.937 ], 00:13:37.937 "dhchap_digests": [ 00:13:37.937 "sha256", 00:13:37.937 "sha384", 00:13:37.937 "sha512" 00:13:37.937 ], 00:13:37.937 "disable_auto_failback": false, 00:13:37.937 "fast_io_fail_timeout_sec": 0, 00:13:37.937 "generate_uuids": false, 00:13:37.937 "high_priority_weight": 0, 00:13:37.937 "io_path_stat": false, 00:13:37.937 "io_queue_requests": 512, 00:13:37.937 "keep_alive_timeout_ms": 10000, 00:13:37.937 "low_priority_weight": 0, 00:13:37.937 "medium_priority_weight": 0, 00:13:37.937 "nvme_adminq_poll_period_us": 10000, 00:13:37.937 "nvme_error_stat": false, 00:13:37.937 "nvme_ioq_poll_period_us": 0, 00:13:37.937 "rdma_cm_event_timeout_ms": 0, 00:13:37.938 "rdma_max_cq_size": 0, 00:13:37.938 "rdma_srq_size": 0, 00:13:37.938 "reconnect_delay_sec": 0, 00:13:37.938 "timeout_admin_us": 0, 00:13:37.938 "timeout_us": 0, 00:13:37.938 "transport_ack_timeout": 0, 00:13:37.938 "transport_retry_count": 4, 00:13:37.938 "transport_tos": 0 00:13:37.938 } 00:13:37.938 }, 00:13:37.938 { 00:13:37.938 "method": "bdev_nvme_attach_controller", 00:13:37.938 "params": { 00:13:37.938 "adrfam": "IPv4", 00:13:37.938 "ctrlr_loss_timeout_sec": 0, 00:13:37.938 "ddgst": false, 00:13:37.938 "fast_io_fail_timeout_sec": 0, 00:13:37.938 "hdgst": false, 00:13:37.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:37.938 "name": "nvme0", 00:13:37.938 "prchk_guard": false, 00:13:37.938 "prchk_reftag": false, 00:13:37.938 "psk": "key0", 00:13:37.938 "reconnect_delay_sec": 0, 00:13:37.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.938 "traddr": "10.0.0.2", 00:13:37.938 "trsvcid": "4420", 00:13:37.938 "trtype": "TCP" 00:13:37.938 } 00:13:37.938 }, 00:13:37.938 { 00:13:37.938 "method": "bdev_nvme_set_hotplug", 00:13:37.938 "params": { 00:13:37.938 "enable": false, 00:13:37.938 "period_us": 100000 00:13:37.938 } 00:13:37.938 }, 00:13:37.938 { 00:13:37.938 "method": "bdev_enable_histogram", 00:13:37.938 "params": { 00:13:37.938 "enable": true, 00:13:37.938 "name": "nvme0n1" 00:13:37.938 } 00:13:37.938 }, 00:13:37.938 { 00:13:37.938 "method": "bdev_wait_for_examine" 00:13:37.938 } 00:13:37.938 ] 00:13:37.938 }, 00:13:37.938 { 00:13:37.938 "subsystem": "nbd", 00:13:37.938 "config": [] 00:13:37.938 } 00:13:37.938 ] 00:13:37.938 }' 00:13:37.938 23:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 78867 00:13:37.938 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78867 ']' 00:13:37.938 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78867 00:13:37.938 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:37.938 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:37.938 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78867 00:13:37.938 23:00:50 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:37.938 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:37.938 killing process with pid 78867 00:13:37.938 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78867' 00:13:37.938 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78867 00:13:37.938 Received shutdown signal, test time was about 1.000000 seconds 00:13:37.938 00:13:37.938 Latency(us) 00:13:37.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.938 =================================================================================================================== 00:13:37.938 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:37.938 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78867 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 78817 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78817 ']' 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78817 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78817 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:38.197 killing process with pid 78817 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78817' 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78817 00:13:38.197 [2024-05-14 23:00:50.380820] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78817 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.197 23:00:50 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:13:38.197 "subsystems": [ 00:13:38.197 { 00:13:38.197 "subsystem": "keyring", 00:13:38.197 "config": [ 00:13:38.197 { 00:13:38.197 "method": "keyring_file_add_key", 00:13:38.197 "params": { 00:13:38.197 "name": "key0", 00:13:38.197 "path": "/tmp/tmp.iCkTIHcImr" 00:13:38.197 } 00:13:38.197 } 00:13:38.197 ] 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "subsystem": "iobuf", 00:13:38.197 "config": [ 00:13:38.197 { 00:13:38.197 "method": "iobuf_set_options", 00:13:38.197 "params": { 00:13:38.197 "large_bufsize": 135168, 00:13:38.197 "large_pool_count": 1024, 00:13:38.197 "small_bufsize": 8192, 00:13:38.197 "small_pool_count": 8192 00:13:38.197 } 00:13:38.197 } 00:13:38.197 ] 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "subsystem": "sock", 00:13:38.197 "config": [ 00:13:38.197 { 00:13:38.197 "method": "sock_impl_set_options", 00:13:38.197 "params": { 00:13:38.197 "enable_ktls": false, 00:13:38.197 "enable_placement_id": 0, 00:13:38.197 "enable_quickack": false, 00:13:38.197 "enable_recv_pipe": true, 00:13:38.197 
"enable_zerocopy_send_client": false, 00:13:38.197 "enable_zerocopy_send_server": true, 00:13:38.197 "impl_name": "posix", 00:13:38.197 "recv_buf_size": 2097152, 00:13:38.197 "send_buf_size": 2097152, 00:13:38.197 "tls_version": 0, 00:13:38.197 "zerocopy_threshold": 0 00:13:38.197 } 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "method": "sock_impl_set_options", 00:13:38.197 "params": { 00:13:38.197 "enable_ktls": false, 00:13:38.197 "enable_placement_id": 0, 00:13:38.197 "enable_quickack": false, 00:13:38.197 "enable_recv_pipe": true, 00:13:38.197 "enable_zerocopy_send_client": false, 00:13:38.197 "enable_zerocopy_send_server": true, 00:13:38.197 "impl_name": "ssl", 00:13:38.197 "recv_buf_size": 4096, 00:13:38.197 "send_buf_size": 4096, 00:13:38.197 "tls_version": 0, 00:13:38.197 "zerocopy_threshold": 0 00:13:38.197 } 00:13:38.197 } 00:13:38.197 ] 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "subsystem": "vmd", 00:13:38.197 "config": [] 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "subsystem": "accel", 00:13:38.197 "config": [ 00:13:38.197 { 00:13:38.197 "method": "accel_set_options", 00:13:38.197 "params": { 00:13:38.197 "buf_count": 2048, 00:13:38.197 "large_cache_size": 16, 00:13:38.197 "sequence_count": 2048, 00:13:38.197 "small_cache_size": 128, 00:13:38.197 "task_count": 2048 00:13:38.197 } 00:13:38.197 } 00:13:38.197 ] 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "subsystem": "bdev", 00:13:38.197 "config": [ 00:13:38.197 { 00:13:38.197 "method": "bdev_set_options", 00:13:38.197 "params": { 00:13:38.197 "bdev_auto_examine": true, 00:13:38.197 "bdev_io_cache_size": 256, 00:13:38.197 "bdev_io_pool_size": 65535, 00:13:38.197 "iobuf_large_cache_size": 16, 00:13:38.197 "iobuf_small_cache_size": 128 00:13:38.197 } 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "method": "bdev_raid_set_options", 00:13:38.197 "params": { 00:13:38.197 "process_window_size_kb": 1024 00:13:38.197 } 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "method": "bdev_iscsi_set_options", 00:13:38.197 "params": { 00:13:38.197 "timeout_sec": 30 00:13:38.197 } 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "method": "bdev_nvme_set_options", 00:13:38.197 "params": { 00:13:38.197 "action_on_timeout": "none", 00:13:38.197 "allow_accel_sequence": false, 00:13:38.197 "arbitration_burst": 0, 00:13:38.197 "bdev_retry_count": 3, 00:13:38.197 "ctrlr_loss_timeout_sec": 0, 00:13:38.197 "delay_cmd_submit": true, 00:13:38.197 "dhchap_dhgroups": [ 00:13:38.197 "null", 00:13:38.197 "ffdhe2048", 00:13:38.197 "ffdhe3072", 00:13:38.197 "ffdhe4096", 00:13:38.197 "ffdhe6144", 00:13:38.197 "ffdhe8192" 00:13:38.197 ], 00:13:38.197 "dhchap_digests": [ 00:13:38.197 "sha256", 00:13:38.197 "sha384", 00:13:38.197 "sha512" 00:13:38.197 ], 00:13:38.197 "disable_auto_failback": false, 00:13:38.197 "fast_io_fail_timeout_sec": 0, 00:13:38.197 "generate_uuids": false, 00:13:38.197 "high_priority_weight": 0, 00:13:38.197 "io_path_stat": false, 00:13:38.197 "io_queue_requests": 0, 00:13:38.197 "keep_alive_timeout_ms": 10000, 00:13:38.197 "low_priority_weight": 0, 00:13:38.197 "medium_priority_weight": 0, 00:13:38.197 "nvme_adminq_poll_period_us": 10000, 00:13:38.197 "nvme_error_stat": false, 00:13:38.197 "nvme_ioq_poll_period_us": 0, 00:13:38.197 "rdma_cm_event_timeout_ms": 0, 00:13:38.197 "rdma_max_cq_size": 0, 00:13:38.197 "rdma_srq_size": 0, 00:13:38.197 "reconnect_delay_sec": 0, 00:13:38.197 "timeout_admin_us": 0, 00:13:38.197 "timeout_us": 0, 00:13:38.197 "transport_ack_timeout": 0, 00:13:38.197 "transport_retry_count": 4, 00:13:38.197 "transport_tos": 0 
00:13:38.197 } 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "method": "bdev_nvme_set_hotplug", 00:13:38.197 "params": { 00:13:38.197 "enable": false, 00:13:38.197 "period_us": 100000 00:13:38.197 } 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "method": "bdev_malloc_create", 00:13:38.197 "params": { 00:13:38.197 "block_size": 4096, 00:13:38.197 "name": "malloc0", 00:13:38.197 "num_blocks": 8192, 00:13:38.197 "optimal_io_boundary": 0, 00:13:38.197 "physical_block_size": 4096, 00:13:38.197 "uuid": "6ff4b729-53df-4708-bb2f-3c20627872ac" 00:13:38.197 } 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "method": "bdev_wait_for_examine" 00:13:38.197 } 00:13:38.197 ] 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "subsystem": "nbd", 00:13:38.197 "config": [] 00:13:38.197 }, 00:13:38.197 { 00:13:38.197 "subsystem": "scheduler", 00:13:38.197 "config": [ 00:13:38.197 { 00:13:38.197 "method": "framework_set_scheduler", 00:13:38.197 "params": { 00:13:38.197 "name": "static" 00:13:38.198 } 00:13:38.198 } 00:13:38.198 ] 00:13:38.198 }, 00:13:38.198 { 00:13:38.198 "subsystem": "nvmf", 00:13:38.198 "config": [ 00:13:38.198 { 00:13:38.198 "method": "nvmf_set_config", 00:13:38.198 "params": { 00:13:38.198 "admin_cmd_passthru": { 00:13:38.198 "identify_ctrlr": false 00:13:38.198 }, 00:13:38.198 "discovery_filter": "match_any" 00:13:38.198 } 00:13:38.198 }, 00:13:38.198 { 00:13:38.198 "method": "nvmf_set_max_subsystems", 00:13:38.198 "params": { 00:13:38.198 "max_subsystems": 1024 00:13:38.198 } 00:13:38.198 }, 00:13:38.198 { 00:13:38.198 "method": "nvmf_set_crdt", 00:13:38.198 "params": { 00:13:38.198 "crdt1": 0, 00:13:38.198 "crdt2": 0, 00:13:38.198 "crdt3": 0 00:13:38.198 } 00:13:38.198 }, 00:13:38.198 { 00:13:38.198 "method": "nvmf_create_transport", 00:13:38.198 "params": { 00:13:38.198 "abort_timeout_sec": 1, 00:13:38.198 "ack_timeout": 0, 00:13:38.198 "buf_cache_size": 4294967295, 00:13:38.198 "c2h_success": false, 00:13:38.198 "data_wr_pool_size": 0, 00:13:38.198 "dif_insert_or_strip": false, 00:13:38.198 "in_capsule_data_size": 4096, 00:13:38.198 "io_unit_size": 131072, 00:13:38.198 "max_aq_depth": 128, 00:13:38.198 "max_io_qpairs_per_ctrlr": 127, 00:13:38.198 "max_io_size": 131072, 00:13:38.198 "max_queue_depth": 128, 00:13:38.198 "num_shared_buffers": 511, 00:13:38.198 "sock_priority": 0, 00:13:38.198 "trtype": "TCP", 00:13:38.198 "zcopy": false 00:13:38.198 } 00:13:38.198 }, 00:13:38.198 { 00:13:38.198 "method": "nvmf_create_subsystem", 00:13:38.198 "params": { 00:13:38.198 "allow_any_host": false, 00:13:38.198 "ana_reporting": false, 00:13:38.198 "max_cntlid": 65519, 00:13:38.198 "max_namespaces": 32, 00:13:38.198 "min_cntlid": 1, 00:13:38.198 "m 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:38.198 odel_number": "SPDK bdev Controller", 00:13:38.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.198 "serial_number": "00000000000000000000" 00:13:38.198 } 00:13:38.198 }, 00:13:38.198 { 00:13:38.198 "method": "nvmf_subsystem_add_host", 00:13:38.198 "params": { 00:13:38.198 "host": "nqn.2016-06.io.spdk:host1", 00:13:38.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.198 "psk": "key0" 00:13:38.198 } 00:13:38.198 }, 00:13:38.198 { 00:13:38.198 "method": "nvmf_subsystem_add_ns", 00:13:38.198 "params": { 00:13:38.198 "namespace": { 00:13:38.198 "bdev_name": "malloc0", 00:13:38.198 "nguid": "6FF4B72953DF4708BB2F3C20627872AC", 00:13:38.198 "no_auto_visible": false, 00:13:38.198 "nsid": 1, 00:13:38.198 "uuid": "6ff4b729-53df-4708-bb2f-3c20627872ac" 00:13:38.198 }, 00:13:38.198 
"nqn": "nqn.2016-06.io.spdk:cnode1" 00:13:38.198 } 00:13:38.198 }, 00:13:38.198 { 00:13:38.198 "method": "nvmf_subsystem_add_listener", 00:13:38.198 "params": { 00:13:38.198 "listen_address": { 00:13:38.198 "adrfam": "IPv4", 00:13:38.198 "traddr": "10.0.0.2", 00:13:38.198 "trsvcid": "4420", 00:13:38.198 "trtype": "TCP" 00:13:38.198 }, 00:13:38.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.198 "secure_channel": true 00:13:38.198 } 00:13:38.198 } 00:13:38.198 ] 00:13:38.198 } 00:13:38.198 ] 00:13:38.198 }' 00:13:38.198 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.198 23:00:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78944 00:13:38.198 23:00:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78944 00:13:38.198 23:00:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:13:38.198 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78944 ']' 00:13:38.198 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.198 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:38.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.198 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.198 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:38.198 23:00:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.456 [2024-05-14 23:00:50.633058] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:38.456 [2024-05-14 23:00:50.633146] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.456 [2024-05-14 23:00:50.764501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.456 [2024-05-14 23:00:50.824550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.456 [2024-05-14 23:00:50.824616] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.456 [2024-05-14 23:00:50.824630] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.456 [2024-05-14 23:00:50.824638] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.456 [2024-05-14 23:00:50.824645] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:38.456 [2024-05-14 23:00:50.824726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.716 [2024-05-14 23:00:51.011966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.716 [2024-05-14 23:00:51.043851] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:38.716 [2024-05-14 23:00:51.044016] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:38.716 [2024-05-14 23:00:51.044248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=78988 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 78988 /var/tmp/bdevperf.sock 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 78988 ']' 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:39.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
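The bdevperf instance started just below is driven the same way: the bperfcfg JSON saved from the first bdevperf already contains the keyring_file_add_key entry for key0 and a bdev_nvme_attach_controller call with "psk": "key0", so feeding it back through /dev/fd/63 re-establishes the TLS connection without issuing live RPCs. A rough sketch of the launch-and-drive sequence (relative paths, namespace wrapper omitted; not the harness's exact code):

    # start bdevperf idle (-z) with the captured JSON as its startup config
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    # once the RPC socket is up, confirm the controller attached and run the workload
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests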
00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:13:39.651 23:00:51 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:13:39.651 "subsystems": [ 00:13:39.651 { 00:13:39.651 "subsystem": "keyring", 00:13:39.651 "config": [ 00:13:39.651 { 00:13:39.651 "method": "keyring_file_add_key", 00:13:39.651 "params": { 00:13:39.651 "name": "key0", 00:13:39.651 "path": "/tmp/tmp.iCkTIHcImr" 00:13:39.651 } 00:13:39.651 } 00:13:39.651 ] 00:13:39.651 }, 00:13:39.651 { 00:13:39.651 "subsystem": "iobuf", 00:13:39.651 "config": [ 00:13:39.651 { 00:13:39.651 "method": "iobuf_set_options", 00:13:39.651 "params": { 00:13:39.651 "large_bufsize": 135168, 00:13:39.651 "large_pool_count": 1024, 00:13:39.651 "small_bufsize": 8192, 00:13:39.651 "small_pool_count": 8192 00:13:39.651 } 00:13:39.651 } 00:13:39.651 ] 00:13:39.651 }, 00:13:39.651 { 00:13:39.651 "subsystem": "sock", 00:13:39.651 "config": [ 00:13:39.651 { 00:13:39.651 "method": "sock_impl_set_options", 00:13:39.651 "params": { 00:13:39.651 "enable_ktls": false, 00:13:39.651 "enable_placement_id": 0, 00:13:39.651 "enable_quickack": false, 00:13:39.651 "enable_recv_pipe": true, 00:13:39.651 "enable_zerocopy_send_client": false, 00:13:39.651 "enable_zerocopy_send_server": true, 00:13:39.651 "impl_name": "posix", 00:13:39.651 "recv_buf_size": 2097152, 00:13:39.651 "send_buf_size": 2097152, 00:13:39.651 "tls_version": 0, 00:13:39.651 "zerocopy_threshold": 0 00:13:39.651 } 00:13:39.651 }, 00:13:39.651 { 00:13:39.651 "method": "sock_impl_set_options", 00:13:39.651 "params": { 00:13:39.651 "enable_ktls": false, 00:13:39.651 "enable_placement_id": 0, 00:13:39.651 "enable_quickack": false, 00:13:39.651 "enable_recv_pipe": true, 00:13:39.651 "enable_zerocopy_send_client": false, 00:13:39.651 "enable_zerocopy_send_server": true, 00:13:39.651 "impl_name": "ssl", 00:13:39.651 "recv_buf_size": 4096, 00:13:39.651 "send_buf_size": 4096, 00:13:39.651 "tls_version": 0, 00:13:39.651 "zerocopy_threshold": 0 00:13:39.651 } 00:13:39.651 } 00:13:39.651 ] 00:13:39.651 }, 00:13:39.651 { 00:13:39.651 "subsystem": "vmd", 00:13:39.651 "config": [] 00:13:39.651 }, 00:13:39.651 { 00:13:39.651 "subsystem": "accel", 00:13:39.651 "config": [ 00:13:39.651 { 00:13:39.651 "method": "accel_set_options", 00:13:39.651 "params": { 00:13:39.651 "buf_count": 2048, 00:13:39.651 "large_cache_size": 16, 00:13:39.651 "sequence_count": 2048, 00:13:39.651 "small_cache_size": 128, 00:13:39.651 "task_count": 2048 00:13:39.651 } 00:13:39.651 } 00:13:39.651 ] 00:13:39.651 }, 00:13:39.651 { 00:13:39.651 "subsystem": "bdev", 00:13:39.651 "config": [ 00:13:39.651 { 00:13:39.651 "method": "bdev_set_options", 00:13:39.651 "params": { 00:13:39.651 "bdev_auto_examine": true, 00:13:39.651 "bdev_io_cache_size": 256, 00:13:39.651 "bdev_io_pool_size": 65535, 00:13:39.651 "iobuf_large_cache_size": 16, 00:13:39.651 "iobuf_small_cache_size": 128 00:13:39.651 } 00:13:39.651 }, 00:13:39.651 { 00:13:39.651 "method": "bdev_raid_set_options", 00:13:39.651 "params": { 00:13:39.651 "process_window_size_kb": 1024 00:13:39.651 } 00:13:39.651 }, 00:13:39.651 { 00:13:39.651 "method": "bdev_iscsi_set_options", 00:13:39.651 "params": { 00:13:39.651 "timeout_sec": 30 00:13:39.651 } 00:13:39.651 }, 00:13:39.651 { 
00:13:39.651 "method": "bdev_nvme_set_options", 00:13:39.651 "params": { 00:13:39.651 "action_on_timeout": "none", 00:13:39.651 "allow_accel_sequence": false, 00:13:39.651 "arbitration_burst": 0, 00:13:39.651 "bdev_retry_count": 3, 00:13:39.651 "ctrlr_loss_timeout_sec": 0, 00:13:39.651 "delay_cmd_submit": true, 00:13:39.651 "dhchap_dhgroups": [ 00:13:39.651 "null", 00:13:39.651 "ffdhe2048", 00:13:39.651 "ffdhe3072", 00:13:39.651 "ffdhe4096", 00:13:39.651 "ffdhe6144", 00:13:39.651 "ffdhe8192" 00:13:39.651 ], 00:13:39.651 "dhchap_digests": [ 00:13:39.651 "sha256", 00:13:39.651 "sha384", 00:13:39.651 "sha512" 00:13:39.651 ], 00:13:39.651 "disable_auto_failback": false, 00:13:39.651 "fast_io_fail_timeout_sec": 0, 00:13:39.651 "generate_uuids": false, 00:13:39.651 "high_priority_weight": 0, 00:13:39.651 "io_path_stat": false, 00:13:39.651 "io_queue_requests": 512, 00:13:39.651 "keep_alive_timeout_ms": 10000, 00:13:39.651 "low_priority_weight": 0, 00:13:39.651 "medium_priority_weight": 0, 00:13:39.651 "nvme_adminq_poll_period_us": 10000, 00:13:39.651 "nvme_error_stat": false, 00:13:39.651 "nvme_ioq_poll_period_us": 0, 00:13:39.651 "rdma_cm_event_timeout_ms": 0, 00:13:39.651 "rdma_max_cq_size": 0, 00:13:39.651 "rdma_srq_size": 0, 00:13:39.651 "reconnect_delay_sec": 0, 00:13:39.651 "timeout_admin_us": 0, 00:13:39.652 "timeout_us": 0, 00:13:39.652 "transport_ack_timeout": 0, 00:13:39.652 "transport_retry_count": 4, 00:13:39.652 "transport_tos": 0 00:13:39.652 } 00:13:39.652 }, 00:13:39.652 { 00:13:39.652 "method": "bdev_nvme_attach_controller", 00:13:39.652 "params": { 00:13:39.652 "adrfam": "IPv4", 00:13:39.652 "ctrlr_loss_timeout_sec": 0, 00:13:39.652 "ddgst": false, 00:13:39.652 "fast_io_fail_timeout_sec": 0, 00:13:39.652 "hdgst": false, 00:13:39.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:39.652 "name": "nvme0", 00:13:39.652 "prchk_guard": false, 00:13:39.652 "prchk_reftag": false, 00:13:39.652 "psk": "key0", 00:13:39.652 "reconnect_delay_sec": 0, 00:13:39.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.652 "traddr": "10.0.0.2", 00:13:39.652 "trsvcid": "4420", 00:13:39.652 "trtype": "TCP" 00:13:39.652 } 00:13:39.652 }, 00:13:39.652 { 00:13:39.652 "method": "bdev_nvme_set_hotplug", 00:13:39.652 "params": { 00:13:39.652 "enable": false, 00:13:39.652 "period_us": 100000 00:13:39.652 } 00:13:39.652 }, 00:13:39.652 { 00:13:39.652 "method": "bdev_enable_histogram", 00:13:39.652 "params": { 00:13:39.652 "enable": true, 00:13:39.652 "name": "nvme0n1" 00:13:39.652 } 00:13:39.652 }, 00:13:39.652 { 00:13:39.652 "method": "bdev_wait_for_examine" 00:13:39.652 } 00:13:39.652 ] 00:13:39.652 }, 00:13:39.652 { 00:13:39.652 "subsystem": "nbd", 00:13:39.652 "config": [] 00:13:39.652 } 00:13:39.652 ] 00:13:39.652 }' 00:13:39.652 [2024-05-14 23:00:51.777813] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:13:39.652 [2024-05-14 23:00:51.777938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78988 ] 00:13:39.652 [2024-05-14 23:00:51.924625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.652 [2024-05-14 23:00:51.999379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.911 [2024-05-14 23:00:52.131513] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:40.477 23:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:40.477 23:00:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:40.477 23:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:40.477 23:00:52 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:13:41.041 23:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.041 23:00:53 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:41.041 Running I/O for 1 seconds... 00:13:41.974 00:13:41.974 Latency(us) 00:13:41.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.974 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:41.974 Verification LBA range: start 0x0 length 0x2000 00:13:41.974 nvme0n1 : 1.03 3562.50 13.92 0.00 0.00 35383.57 7864.32 21328.99 00:13:41.974 =================================================================================================================== 00:13:41.974 Total : 3562.50 13.92 0.00 0.00 35383.57 7864.32 21328.99 00:13:41.974 0 00:13:41.974 23:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:13:41.974 23:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:13:41.974 23:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:13:41.974 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:13:41.974 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:13:41.974 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:13:41.974 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:41.974 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:13:41.974 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:13:41.974 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:13:41.974 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:42.234 nvmf_trace.0 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 78988 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78988 ']' 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78988 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78988 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:42.234 killing process with pid 78988 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78988' 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78988 00:13:42.234 Received shutdown signal, test time was about 1.000000 seconds 00:13:42.234 00:13:42.234 Latency(us) 00:13:42.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.234 =================================================================================================================== 00:13:42.234 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:42.234 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78988 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.492 rmmod nvme_tcp 00:13:42.492 rmmod nvme_fabrics 00:13:42.492 rmmod nvme_keyring 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 78944 ']' 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 78944 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 78944 ']' 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 78944 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78944 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:42.492 killing process with pid 78944 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78944' 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 78944 00:13:42.492 [2024-05-14 23:00:54.780107] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:42.492 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 78944 00:13:42.750 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.750 23:00:54 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.750 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.750 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.750 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.750 23:00:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.750 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.750 23:00:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.750 23:00:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:42.750 23:00:55 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.XF4ctfSlZ5 /tmp/tmp.dqM0o0Y4Tm /tmp/tmp.iCkTIHcImr 00:13:42.750 00:13:42.750 real 1m20.593s 00:13:42.750 user 2m6.854s 00:13:42.750 sys 0m26.698s 00:13:42.750 23:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:42.750 23:00:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.750 ************************************ 00:13:42.750 END TEST nvmf_tls 00:13:42.750 ************************************ 00:13:42.750 23:00:55 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:42.750 23:00:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:42.750 23:00:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:42.750 23:00:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:42.750 ************************************ 00:13:42.750 START TEST nvmf_fips 00:13:42.750 ************************************ 00:13:42.750 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:42.750 * Looking for test storage... 
00:13:43.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:43.009 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:13:43.010 Error setting digest 00:13:43.010 00B2FB7F427F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:13:43.010 00B2FB7F427F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:43.010 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:43.269 Cannot find device "nvmf_tgt_br" 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.269 Cannot find device "nvmf_tgt_br2" 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:43.269 Cannot find device "nvmf_tgt_br" 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:43.269 Cannot find device "nvmf_tgt_br2" 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:43.269 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:43.270 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:43.270 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:43.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:13:43.577 00:13:43.577 --- 10.0.0.2 ping statistics --- 00:13:43.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.577 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:43.577 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:43.577 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:13:43.577 00:13:43.577 --- 10.0.0.3 ping statistics --- 00:13:43.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.577 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:43.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:43.577 00:13:43.577 --- 10.0.0.1 ping statistics --- 00:13:43.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.577 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.577 23:00:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=79268 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 79268 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 79268 ']' 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:43.578 23:00:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:43.578 [2024-05-14 23:00:55.821197] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:13:43.578 [2024-05-14 23:00:55.821734] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.836 [2024-05-14 23:00:55.953559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.836 [2024-05-14 23:00:56.011522] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.836 [2024-05-14 23:00:56.011572] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.836 [2024-05-14 23:00:56.011585] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.836 [2024-05-14 23:00:56.011593] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.836 [2024-05-14 23:00:56.011601] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.836 [2024-05-14 23:00:56.011631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:44.771 23:00:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:44.771 [2024-05-14 23:00:57.072246] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.771 [2024-05-14 23:00:57.088165] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:44.771 [2024-05-14 23:00:57.088234] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:44.771 [2024-05-14 23:00:57.088399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.771 [2024-05-14 23:00:57.114884] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:44.771 malloc0 00:13:44.771 23:00:57 
nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:44.771 23:00:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:44.771 23:00:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=79331 00:13:44.771 23:00:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 79331 /var/tmp/bdevperf.sock 00:13:44.771 23:00:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 79331 ']' 00:13:44.771 23:00:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:44.771 23:00:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:44.771 23:00:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:44.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:44.771 23:00:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:44.771 23:00:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:45.031 [2024-05-14 23:00:57.211453] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:45.031 [2024-05-14 23:00:57.211537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79331 ] 00:13:45.031 [2024-05-14 23:00:57.347881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.031 [2024-05-14 23:00:57.405676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.288 23:00:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:45.288 23:00:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:13:45.288 23:00:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:45.546 [2024-05-14 23:00:57.719395] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:45.546 [2024-05-14 23:00:57.719503] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:45.546 TLSTESTn1 00:13:45.546 23:00:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:45.546 Running I/O for 10 seconds... 
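At this point the FIPS test has its TLS path fully wired up: the interleaved PSK was written to a mode-0600 key file, the target exposed a TLS-enabled listener on 10.0.0.2:4420, and bdevperf attached a controller over it before perform_tests started the 10-second verify run. Condensed from the commands traced above, the sequence amounts to roughly the following shell sketch (paths are shown relative to the SPDK repo root; the addresses, NQNs, and key are the ones visible in the trace, so treat this as an illustrative summary rather than a standalone recipe):

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > key.txt && chmod 0600 key.txt   # PSK file must not be group/world readable
    # start bdevperf with its own RPC socket; it stays idle until perform_tests is sent
    # (backgrounded here for illustration; the harness waits on the socket instead)
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # attach a TLS controller to the target listening on 10.0.0.2:4420
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
    # kick off the workload that was defined on the bdevperf command line
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Note the deprecation warnings in the trace: both the target-side PSK path (nvmf_tcp_psk_path) and spdk_nvme_ctrlr_opts.psk are flagged for removal in v24.09, so the exact options may differ on newer SPDK releases.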
00:13:57.751 00:13:57.751 Latency(us) 00:13:57.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.751 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:57.751 Verification LBA range: start 0x0 length 0x2000 00:13:57.751 TLSTESTn1 : 10.02 3883.70 15.17 0.00 0.00 32893.14 7298.33 27167.65 00:13:57.751 =================================================================================================================== 00:13:57.751 Total : 3883.70 15.17 0.00 0.00 32893.14 7298.33 27167.65 00:13:57.751 0 00:13:57.751 23:01:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:13:57.751 23:01:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:13:57.751 23:01:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:13:57.751 23:01:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:13:57.751 23:01:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:13:57.751 23:01:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:57.751 23:01:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:13:57.751 23:01:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:13:57.751 23:01:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:13:57.751 23:01:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:57.751 nvmf_trace.0 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 79331 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 79331 ']' 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 79331 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79331 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79331' 00:13:57.751 killing process with pid 79331 00:13:57.751 Received shutdown signal, test time was about 10.000000 seconds 00:13:57.751 00:13:57.751 Latency(us) 00:13:57.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.751 =================================================================================================================== 00:13:57.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 79331 00:13:57.751 [2024-05-14 23:01:08.080986] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 79331 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:57.751 rmmod nvme_tcp 00:13:57.751 rmmod nvme_fabrics 00:13:57.751 rmmod nvme_keyring 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 79268 ']' 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 79268 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 79268 ']' 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 79268 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79268 00:13:57.751 killing process with pid 79268 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79268' 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 79268 00:13:57.751 [2024-05-14 23:01:08.359571] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:57.751 [2024-05-14 23:01:08.359610] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 79268 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:57.751 00:13:57.751 real 0m13.524s 00:13:57.751 user 0m17.949s 00:13:57.751 sys 0m5.510s 00:13:57.751 23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:57.751 
23:01:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 ************************************ 00:13:57.751 END TEST nvmf_fips 00:13:57.751 ************************************ 00:13:57.751 23:01:08 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:13:57.751 23:01:08 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:13:57.751 23:01:08 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:13:57.751 23:01:08 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:57.751 23:01:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 23:01:08 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:13:57.751 23:01:08 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:57.751 23:01:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 23:01:08 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:13:57.751 23:01:08 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:13:57.751 23:01:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:57.751 23:01:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:57.751 23:01:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 ************************************ 00:13:57.751 START TEST nvmf_multicontroller 00:13:57.751 ************************************ 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:13:57.751 * Looking for test storage... 00:13:57.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.751 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.752 
23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 
-- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:57.752 Cannot find device "nvmf_tgt_br" 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:57.752 Cannot find device "nvmf_tgt_br2" 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:57.752 Cannot find device "nvmf_tgt_br" 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:57.752 Cannot find device "nvmf_tgt_br2" 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:57.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:57.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:57.752 23:01:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr 
add 10.0.0.2/24 dev nvmf_tgt_if 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:57.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:57.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:13:57.752 00:13:57.752 --- 10.0.0.2 ping statistics --- 00:13:57.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.752 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:57.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:57.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:13:57.752 00:13:57.752 --- 10.0.0.3 ping statistics --- 00:13:57.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.752 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:57.752 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:57.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:57.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:57.752 00:13:57.752 --- 10.0.0.1 ping statistics --- 00:13:57.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.753 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=79680 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 79680 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 79680 ']' 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:57.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 [2024-05-14 23:01:09.252473] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:57.753 [2024-05-14 23:01:09.252784] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.753 [2024-05-14 23:01:09.390129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:57.753 [2024-05-14 23:01:09.449045] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:57.753 [2024-05-14 23:01:09.449107] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.753 [2024-05-14 23:01:09.449120] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.753 [2024-05-14 23:01:09.449128] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.753 [2024-05-14 23:01:09.449136] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.753 [2024-05-14 23:01:09.449267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.753 [2024-05-14 23:01:09.449755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.753 [2024-05-14 23:01:09.449797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 [2024-05-14 23:01:09.577010] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 Malloc0 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 [2024-05-14 23:01:09.642398] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:57.753 [2024-05-14 23:01:09.642856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 [2024-05-14 23:01:09.650512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 Malloc1 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.753 23:01:09 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=79718 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 79718 /var/tmp/bdevperf.sock 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 79718 ']' 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:57.753 23:01:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:58.380 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:58.380 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:13:58.380 23:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:13:58.380 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.380 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:58.639 NVMe0n1 00:13:58.639 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.639 23:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:58.639 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.640 1 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:58.640 2024/05/14 23:01:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:13:58.640 request: 00:13:58.640 { 00:13:58.640 "method": "bdev_nvme_attach_controller", 00:13:58.640 "params": { 00:13:58.640 "name": "NVMe0", 00:13:58.640 "trtype": "tcp", 00:13:58.640 "traddr": "10.0.0.2", 00:13:58.640 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:13:58.640 "hostaddr": "10.0.0.2", 00:13:58.640 "hostsvcid": "60000", 00:13:58.640 "adrfam": "ipv4", 00:13:58.640 "trsvcid": "4420", 00:13:58.640 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:13:58.640 } 00:13:58.640 } 00:13:58.640 Got JSON-RPC error response 00:13:58.640 GoRPCClient: error on JSON-RPC call 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:13:58.640 
23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:58.640 2024/05/14 23:01:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:13:58.640 request: 00:13:58.640 { 00:13:58.640 "method": "bdev_nvme_attach_controller", 00:13:58.640 "params": { 00:13:58.640 "name": "NVMe0", 00:13:58.640 "trtype": "tcp", 00:13:58.640 "traddr": "10.0.0.2", 00:13:58.640 "hostaddr": "10.0.0.2", 00:13:58.640 "hostsvcid": "60000", 00:13:58.640 "adrfam": "ipv4", 00:13:58.640 "trsvcid": "4420", 00:13:58.640 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:13:58.640 } 00:13:58.640 } 00:13:58.640 Got JSON-RPC error response 00:13:58.640 GoRPCClient: error on JSON-RPC call 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:58.640 2024/05/14 23:01:10 error on 
JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:13:58.640 request: 00:13:58.640 { 00:13:58.640 "method": "bdev_nvme_attach_controller", 00:13:58.640 "params": { 00:13:58.640 "name": "NVMe0", 00:13:58.640 "trtype": "tcp", 00:13:58.640 "traddr": "10.0.0.2", 00:13:58.640 "hostaddr": "10.0.0.2", 00:13:58.640 "hostsvcid": "60000", 00:13:58.640 "adrfam": "ipv4", 00:13:58.640 "trsvcid": "4420", 00:13:58.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.640 "multipath": "disable" 00:13:58.640 } 00:13:58.640 } 00:13:58.640 Got JSON-RPC error response 00:13:58.640 GoRPCClient: error on JSON-RPC call 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:58.640 2024/05/14 23:01:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:13:58.640 request: 00:13:58.640 { 00:13:58.640 "method": "bdev_nvme_attach_controller", 00:13:58.640 "params": { 00:13:58.640 "name": "NVMe0", 00:13:58.640 "trtype": "tcp", 
00:13:58.640 "traddr": "10.0.0.2", 00:13:58.640 "hostaddr": "10.0.0.2", 00:13:58.640 "hostsvcid": "60000", 00:13:58.640 "adrfam": "ipv4", 00:13:58.640 "trsvcid": "4420", 00:13:58.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.640 "multipath": "failover" 00:13:58.640 } 00:13:58.640 } 00:13:58.640 Got JSON-RPC error response 00:13:58.640 GoRPCClient: error on JSON-RPC call 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.640 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:58.641 00:13:58.641 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.641 23:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:58.641 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.641 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:58.641 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.641 23:01:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:13:58.641 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.641 23:01:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:58.899 00:13:58.899 23:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.899 23:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:58.899 23:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.899 23:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:13:58.899 23:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:58.899 23:01:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.899 23:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:13:58.899 23:01:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:59.834 0 00:13:59.834 23:01:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:13:59.834 23:01:12 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.834 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:13:59.834 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.834 23:01:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 79718 00:13:59.834 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 79718 ']' 00:13:59.834 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 79718 00:13:59.834 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:13:59.834 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:59.834 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79718 00:14:00.093 killing process with pid 79718 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79718' 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 79718 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 79718 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:14:00.093 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:14:00.094 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:14:00.094 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:14:00.094 [2024-05-14 23:01:09.754786] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:14:00.094 [2024-05-14 23:01:09.755422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79718 ] 00:14:00.094 [2024-05-14 23:01:09.894265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.094 [2024-05-14 23:01:09.964397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.094 [2024-05-14 23:01:11.049735] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 150d2add-c013-492d-9acb-0ac78019d9dc already exists 00:14:00.094 [2024-05-14 23:01:11.049808] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:150d2add-c013-492d-9acb-0ac78019d9dc alias for bdev NVMe1n1 00:14:00.094 [2024-05-14 23:01:11.049829] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:14:00.094 Running I/O for 1 seconds... 00:14:00.094 00:14:00.094 Latency(us) 00:14:00.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.094 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:14:00.094 NVMe0n1 : 1.01 17938.26 70.07 0.00 0.00 7112.98 3768.32 16681.89 00:14:00.094 =================================================================================================================== 00:14:00.094 Total : 17938.26 70.07 0.00 0.00 7112.98 3768.32 16681.89 00:14:00.094 Received shutdown signal, test time was about 1.000000 seconds 00:14:00.094 00:14:00.094 Latency(us) 00:14:00.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.094 =================================================================================================================== 00:14:00.094 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:00.094 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:14:00.094 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:00.094 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:14:00.094 23:01:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:14:00.094 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.094 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.352 rmmod nvme_tcp 00:14:00.352 rmmod nvme_fabrics 00:14:00.352 rmmod nvme_keyring 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 79680 ']' 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 79680 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 79680 ']' 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # kill -0 79680 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79680 00:14:00.352 killing process with pid 79680 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79680' 00:14:00.352 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 79680 00:14:00.353 [2024-05-14 23:01:12.567691] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:00.353 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 79680 00:14:00.610 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.610 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.610 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.610 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.610 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.610 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.610 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.610 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.610 23:01:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:00.610 00:14:00.610 real 0m4.129s 00:14:00.610 user 0m12.984s 00:14:00.610 sys 0m0.905s 00:14:00.610 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:00.610 ************************************ 00:14:00.610 END TEST nvmf_multicontroller 00:14:00.610 23:01:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:14:00.610 ************************************ 00:14:00.610 23:01:12 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:14:00.610 23:01:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:00.610 23:01:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:00.610 23:01:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.610 ************************************ 00:14:00.610 START TEST nvmf_aer 00:14:00.610 ************************************ 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:14:00.610 * Looking for test storage... 
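Editor's note - a minimal sketch (not part of the captured output) of the attach checks the nvmf_multicontroller test above exercised against the bdevperf RPC socket. The flags mirror the rpc_cmd calls in the log; the scripts/rpc.py path and the RPC/SOCK variable names are illustrative only.

   RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
   SOCK=/var/tmp/bdevperf.sock
   # First attach succeeds and exposes bdev NVMe0n1.
   $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
   # Re-using the name NVMe0 on the same path -- with another hostnqn, against cnode2,
   # or with '-x disable' / '-x failover' -- is expected to fail with
   # "A controller named NVMe0 already exists ...", as seen in the JSON-RPC errors above.
   $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
   # Attaching NVMe0 on the second listener (port 4421) is accepted.
   $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1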
00:14:00.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.610 
23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:00.610 23:01:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:00.868 Cannot find device "nvmf_tgt_br" 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.868 Cannot find device "nvmf_tgt_br2" 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:00.868 Cannot find device "nvmf_tgt_br" 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:00.868 Cannot find device "nvmf_tgt_br2" 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
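Editor's note - the nvmf_veth_init steps logged around here build the virtual test network; they are condensed below for readability (an illustrative recap only, not additional commands executed by the test):

   ip netns add nvmf_tgt_ns_spdk
   ip link add nvmf_init_if type veth peer name nvmf_init_br
   ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
   ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
   ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
   ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
   ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator (host side)
   ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
   ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
   ip link add nvmf_br type bridge
   ip link set nvmf_br up
   ip link set nvmf_init_br master nvmf_br
   ip link set nvmf_tgt_br  master nvmf_br
   ip link set nvmf_tgt_br2 master nvmf_br
   iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
   iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT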
00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.868 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.869 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.869 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:00.869 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:00.869 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:00.869 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:00.869 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:01.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:14:01.127 00:14:01.127 --- 10.0.0.2 ping statistics --- 00:14:01.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.127 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:01.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:01.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:14:01.127 00:14:01.127 --- 10.0.0.3 ping statistics --- 00:14:01.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.127 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:01.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:01.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:01.127 00:14:01.127 --- 10.0.0.1 ping statistics --- 00:14:01.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.127 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=79968 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 79968 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 79968 ']' 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:01.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:01.127 23:01:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:01.127 [2024-05-14 23:01:13.366468] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:01.127 [2024-05-14 23:01:13.366555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.127 [2024-05-14 23:01:13.504840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.385 [2024-05-14 23:01:13.576119] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.385 [2024-05-14 23:01:13.576179] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:01.385 [2024-05-14 23:01:13.576194] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.385 [2024-05-14 23:01:13.576204] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.385 [2024-05-14 23:01:13.576213] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.385 [2024-05-14 23:01:13.576839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.385 [2024-05-14 23:01:13.576915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.385 [2024-05-14 23:01:13.576985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.385 [2024-05-14 23:01:13.576992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.319 [2024-05-14 23:01:14.437962] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.319 Malloc0 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.319 [2024-05-14 23:01:14.494891] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:02.319 [2024-05-14 23:01:14.495375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.319 [ 00:14:02.319 { 00:14:02.319 "allow_any_host": true, 00:14:02.319 "hosts": [], 00:14:02.319 "listen_addresses": [], 00:14:02.319 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:02.319 "subtype": "Discovery" 00:14:02.319 }, 00:14:02.319 { 00:14:02.319 "allow_any_host": true, 00:14:02.319 "hosts": [], 00:14:02.319 "listen_addresses": [ 00:14:02.319 { 00:14:02.319 "adrfam": "IPv4", 00:14:02.319 "traddr": "10.0.0.2", 00:14:02.319 "trsvcid": "4420", 00:14:02.319 "trtype": "TCP" 00:14:02.319 } 00:14:02.319 ], 00:14:02.319 "max_cntlid": 65519, 00:14:02.319 "max_namespaces": 2, 00:14:02.319 "min_cntlid": 1, 00:14:02.319 "model_number": "SPDK bdev Controller", 00:14:02.319 "namespaces": [ 00:14:02.319 { 00:14:02.319 "bdev_name": "Malloc0", 00:14:02.319 "name": "Malloc0", 00:14:02.319 "nguid": "515FA96AB153493D8085959B359B7EE5", 00:14:02.319 "nsid": 1, 00:14:02.319 "uuid": "515fa96a-b153-493d-8085-959b359b7ee5" 00:14:02.319 } 00:14:02.319 ], 00:14:02.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.319 "serial_number": "SPDK00000000000001", 00:14:02.319 "subtype": "NVMe" 00:14:02.319 } 00:14:02.319 ] 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=80022 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:14:02.319 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:14:02.320 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:02.320 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:14:02.320 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:14:02.320 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:14:02.320 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:02.320 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:14:02.320 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:14:02.320 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.578 Malloc1 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.578 Asynchronous Event Request test 00:14:02.578 Attaching to 10.0.0.2 00:14:02.578 Attached to 10.0.0.2 00:14:02.578 Registering asynchronous event callbacks... 00:14:02.578 Starting namespace attribute notice tests for all controllers... 00:14:02.578 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:02.578 aer_cb - Changed Namespace 00:14:02.578 Cleaning up... 00:14:02.578 [ 00:14:02.578 { 00:14:02.578 "allow_any_host": true, 00:14:02.578 "hosts": [], 00:14:02.578 "listen_addresses": [], 00:14:02.578 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:02.578 "subtype": "Discovery" 00:14:02.578 }, 00:14:02.578 { 00:14:02.578 "allow_any_host": true, 00:14:02.578 "hosts": [], 00:14:02.578 "listen_addresses": [ 00:14:02.578 { 00:14:02.578 "adrfam": "IPv4", 00:14:02.578 "traddr": "10.0.0.2", 00:14:02.578 "trsvcid": "4420", 00:14:02.578 "trtype": "TCP" 00:14:02.578 } 00:14:02.578 ], 00:14:02.578 "max_cntlid": 65519, 00:14:02.578 "max_namespaces": 2, 00:14:02.578 "min_cntlid": 1, 00:14:02.578 "model_number": "SPDK bdev Controller", 00:14:02.578 "namespaces": [ 00:14:02.578 { 00:14:02.578 "bdev_name": "Malloc0", 00:14:02.578 "name": "Malloc0", 00:14:02.578 "nguid": "515FA96AB153493D8085959B359B7EE5", 00:14:02.578 "nsid": 1, 00:14:02.578 "uuid": "515fa96a-b153-493d-8085-959b359b7ee5" 00:14:02.578 }, 00:14:02.578 { 00:14:02.578 "bdev_name": "Malloc1", 00:14:02.578 "name": "Malloc1", 00:14:02.578 "nguid": "D3ACA6A55AD24CBFBAA9FA10D21DBC87", 00:14:02.578 "nsid": 2, 00:14:02.578 "uuid": "d3aca6a5-5ad2-4cbf-baa9-fa10d21dbc87" 00:14:02.578 } 00:14:02.578 ], 00:14:02.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.578 "serial_number": "SPDK00000000000001", 00:14:02.578 "subtype": "NVMe" 00:14:02.578 } 00:14:02.578 ] 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 80022 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.578 rmmod nvme_tcp 00:14:02.578 rmmod nvme_fabrics 00:14:02.578 rmmod nvme_keyring 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 79968 ']' 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 79968 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 79968 ']' 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 79968 00:14:02.578 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:14:02.579 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:02.579 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79968 00:14:02.579 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:02.579 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:02.579 killing process with pid 79968 00:14:02.579 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79968' 00:14:02.579 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 79968 00:14:02.579 [2024-05-14 23:01:14.953167] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:02.579 23:01:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 79968 00:14:02.838 23:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.838 23:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:02.838 23:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:02.838 23:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.838 23:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 
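Editor's note - a condensed, illustrative recap of the RPC sequence the nvmf_aer test above drove (rpc_cmd in the log wraps scripts/rpc.py and talks to the target's default /var/tmp/spdk.sock inside the test namespace; paths and socket handling are simplified here):

   RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
   $RPC nvmf_create_transport -t tcp -o -u 8192
   $RPC bdev_malloc_create 64 512 --name Malloc0
   $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
   $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
   $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
   # Start the AER listener, then add a second namespace; the resulting
   # "namespace attribute changed" AEN is what the test waits for before cleanup.
   test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
       -n 2 -t /tmp/aer_touch_file &
   $RPC bdev_malloc_create 64 4096 --name Malloc1
   $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2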
00:14:02.838 23:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.838 23:01:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.838 23:01:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.838 23:01:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:02.838 00:14:02.838 real 0m2.315s 00:14:02.838 user 0m6.483s 00:14:02.838 sys 0m0.579s 00:14:02.838 23:01:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:02.838 23:01:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:14:02.838 ************************************ 00:14:02.838 END TEST nvmf_aer 00:14:02.838 ************************************ 00:14:02.838 23:01:15 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:14:02.838 23:01:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:02.838 23:01:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:02.838 23:01:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:03.098 ************************************ 00:14:03.098 START TEST nvmf_async_init 00:14:03.098 ************************************ 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:14:03.098 * Looking for test storage... 00:14:03.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.098 23:01:15 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.098 23:01:15 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4d9b28ed37f54f4ab966adbc998f72db 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.098 23:01:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:03.099 Cannot find device "nvmf_tgt_br" 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:03.099 Cannot find device "nvmf_tgt_br2" 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:03.099 Cannot find device "nvmf_tgt_br" 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:03.099 Cannot find device "nvmf_tgt_br2" 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:03.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:03.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:03.099 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:03.358 
23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:03.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:14:03.358 00:14:03.358 --- 10.0.0.2 ping statistics --- 00:14:03.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.358 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:03.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:03.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:14:03.358 00:14:03.358 --- 10.0.0.3 ping statistics --- 00:14:03.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.358 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:03.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:03.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:03.358 00:14:03.358 --- 10.0.0.1 ping statistics --- 00:14:03.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.358 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:03.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=80191 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 80191 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 80191 ']' 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:03.358 23:01:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:03.615 [2024-05-14 23:01:15.774034] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:03.615 [2024-05-14 23:01:15.774124] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.615 [2024-05-14 23:01:15.913063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.615 [2024-05-14 23:01:15.979947] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.615 [2024-05-14 23:01:15.980004] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
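[editor's note] The block above is nvmf_veth_init followed by nvmfappstart: the test builds a private topology (initiator veth on the host at 10.0.0.1, two target veths moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, all joined by the nvmf_br bridge, with TCP/4420 opened in iptables), verifies it with the pings shown, and then launches nvmf_tgt inside the namespace. A minimal recap of the commands traced above (the trace also brings each veth end up individually and adds a FORWARD accept rule on nvmf_br, omitted here for brevity):

  # network plumbing used by the test (names and addresses exactly as traced)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # target runs inside the namespace: single core (-m 0x1), all tracepoint groups (-e 0xFFFF)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &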
00:14:03.615 [2024-05-14 23:01:15.980019] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.615 [2024-05-14 23:01:15.980029] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.615 [2024-05-14 23:01:15.980038] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.615 [2024-05-14 23:01:15.980073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:04.597 [2024-05-14 23:01:16.840418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:04.597 null0 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4d9b28ed37f54f4ab966adbc998f72db 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:04.597 
23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:04.597 [2024-05-14 23:01:16.888353] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:04.597 [2024-05-14 23:01:16.888590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.597 23:01:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:04.856 nvme0n1 00:14:04.856 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.856 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:14:04.856 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.856 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:04.856 [ 00:14:04.856 { 00:14:04.856 "aliases": [ 00:14:04.856 "4d9b28ed-37f5-4f4a-b966-adbc998f72db" 00:14:04.856 ], 00:14:04.856 "assigned_rate_limits": { 00:14:04.856 "r_mbytes_per_sec": 0, 00:14:04.856 "rw_ios_per_sec": 0, 00:14:04.856 "rw_mbytes_per_sec": 0, 00:14:04.856 "w_mbytes_per_sec": 0 00:14:04.856 }, 00:14:04.856 "block_size": 512, 00:14:04.856 "claimed": false, 00:14:04.856 "driver_specific": { 00:14:04.856 "mp_policy": "active_passive", 00:14:04.856 "nvme": [ 00:14:04.856 { 00:14:04.856 "ctrlr_data": { 00:14:04.856 "ana_reporting": false, 00:14:04.856 "cntlid": 1, 00:14:04.856 "firmware_revision": "24.05", 00:14:04.856 "model_number": "SPDK bdev Controller", 00:14:04.856 "multi_ctrlr": true, 00:14:04.856 "oacs": { 00:14:04.856 "firmware": 0, 00:14:04.856 "format": 0, 00:14:04.856 "ns_manage": 0, 00:14:04.856 "security": 0 00:14:04.856 }, 00:14:04.856 "serial_number": "00000000000000000000", 00:14:04.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:04.856 "vendor_id": "0x8086" 00:14:04.856 }, 00:14:04.856 "ns_data": { 00:14:04.856 "can_share": true, 00:14:04.856 "id": 1 00:14:04.856 }, 00:14:04.856 "trid": { 00:14:04.856 "adrfam": "IPv4", 00:14:04.856 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:04.856 "traddr": "10.0.0.2", 00:14:04.856 "trsvcid": "4420", 00:14:04.856 "trtype": "TCP" 00:14:04.856 }, 00:14:04.856 "vs": { 00:14:04.856 "nvme_version": "1.3" 00:14:04.856 } 00:14:04.856 } 00:14:04.856 ] 00:14:04.856 }, 00:14:04.856 "memory_domains": [ 00:14:04.856 { 00:14:04.856 "dma_device_id": "system", 00:14:04.856 "dma_device_type": 1 00:14:04.856 } 00:14:04.856 ], 00:14:04.856 "name": "nvme0n1", 00:14:04.856 "num_blocks": 2097152, 00:14:04.856 "product_name": "NVMe disk", 00:14:04.856 "supported_io_types": { 00:14:04.856 "abort": true, 00:14:04.856 "compare": true, 00:14:04.856 "compare_and_write": true, 00:14:04.856 "flush": true, 00:14:04.856 "nvme_admin": true, 00:14:04.856 "nvme_io": true, 00:14:04.856 "read": true, 00:14:04.856 "reset": true, 00:14:04.856 "unmap": false, 00:14:04.856 "write": true, 00:14:04.856 "write_zeroes": true 00:14:04.856 }, 
00:14:04.856 "uuid": "4d9b28ed-37f5-4f4a-b966-adbc998f72db", 00:14:04.856 "zoned": false 00:14:04.856 } 00:14:04.856 ] 00:14:04.856 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.856 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:14:04.856 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.856 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:04.856 [2024-05-14 23:01:17.164496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:04.856 [2024-05-14 23:01:17.164819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1059eb0 (9): Bad file descriptor 00:14:05.116 [2024-05-14 23:01:17.296970] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:05.116 [ 00:14:05.116 { 00:14:05.116 "aliases": [ 00:14:05.116 "4d9b28ed-37f5-4f4a-b966-adbc998f72db" 00:14:05.116 ], 00:14:05.116 "assigned_rate_limits": { 00:14:05.116 "r_mbytes_per_sec": 0, 00:14:05.116 "rw_ios_per_sec": 0, 00:14:05.116 "rw_mbytes_per_sec": 0, 00:14:05.116 "w_mbytes_per_sec": 0 00:14:05.116 }, 00:14:05.116 "block_size": 512, 00:14:05.116 "claimed": false, 00:14:05.116 "driver_specific": { 00:14:05.116 "mp_policy": "active_passive", 00:14:05.116 "nvme": [ 00:14:05.116 { 00:14:05.116 "ctrlr_data": { 00:14:05.116 "ana_reporting": false, 00:14:05.116 "cntlid": 2, 00:14:05.116 "firmware_revision": "24.05", 00:14:05.116 "model_number": "SPDK bdev Controller", 00:14:05.116 "multi_ctrlr": true, 00:14:05.116 "oacs": { 00:14:05.116 "firmware": 0, 00:14:05.116 "format": 0, 00:14:05.116 "ns_manage": 0, 00:14:05.116 "security": 0 00:14:05.116 }, 00:14:05.116 "serial_number": "00000000000000000000", 00:14:05.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:05.116 "vendor_id": "0x8086" 00:14:05.116 }, 00:14:05.116 "ns_data": { 00:14:05.116 "can_share": true, 00:14:05.116 "id": 1 00:14:05.116 }, 00:14:05.116 "trid": { 00:14:05.116 "adrfam": "IPv4", 00:14:05.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:05.116 "traddr": "10.0.0.2", 00:14:05.116 "trsvcid": "4420", 00:14:05.116 "trtype": "TCP" 00:14:05.116 }, 00:14:05.116 "vs": { 00:14:05.116 "nvme_version": "1.3" 00:14:05.116 } 00:14:05.116 } 00:14:05.116 ] 00:14:05.116 }, 00:14:05.116 "memory_domains": [ 00:14:05.116 { 00:14:05.116 "dma_device_id": "system", 00:14:05.116 "dma_device_type": 1 00:14:05.116 } 00:14:05.116 ], 00:14:05.116 "name": "nvme0n1", 00:14:05.116 "num_blocks": 2097152, 00:14:05.116 "product_name": "NVMe disk", 00:14:05.116 "supported_io_types": { 00:14:05.116 "abort": true, 00:14:05.116 "compare": true, 00:14:05.116 "compare_and_write": true, 00:14:05.116 "flush": true, 00:14:05.116 "nvme_admin": true, 00:14:05.116 "nvme_io": true, 00:14:05.116 "read": true, 00:14:05.116 "reset": true, 00:14:05.116 "unmap": false, 00:14:05.116 "write": true, 00:14:05.116 "write_zeroes": true 00:14:05.116 }, 00:14:05.116 "uuid": "4d9b28ed-37f5-4f4a-b966-adbc998f72db", 00:14:05.116 
"zoned": false 00:14:05.116 } 00:14:05.116 ] 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.hmR44g5CB3 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:05.116 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.hmR44g5CB3 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:05.117 [2024-05-14 23:01:17.360710] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:05.117 [2024-05-14 23:01:17.360897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hmR44g5CB3 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:05.117 [2024-05-14 23:01:17.368680] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hmR44g5CB3 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:05.117 [2024-05-14 23:01:17.376680] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:05.117 [2024-05-14 23:01:17.376744] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:14:05.117 nvme0n1 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:05.117 [ 00:14:05.117 { 00:14:05.117 "aliases": [ 00:14:05.117 "4d9b28ed-37f5-4f4a-b966-adbc998f72db" 00:14:05.117 ], 00:14:05.117 "assigned_rate_limits": { 00:14:05.117 "r_mbytes_per_sec": 0, 00:14:05.117 "rw_ios_per_sec": 0, 00:14:05.117 "rw_mbytes_per_sec": 0, 00:14:05.117 "w_mbytes_per_sec": 0 00:14:05.117 }, 00:14:05.117 "block_size": 512, 00:14:05.117 "claimed": false, 00:14:05.117 "driver_specific": { 00:14:05.117 "mp_policy": "active_passive", 00:14:05.117 "nvme": [ 00:14:05.117 { 00:14:05.117 "ctrlr_data": { 00:14:05.117 "ana_reporting": false, 00:14:05.117 "cntlid": 3, 00:14:05.117 "firmware_revision": "24.05", 00:14:05.117 "model_number": "SPDK bdev Controller", 00:14:05.117 "multi_ctrlr": true, 00:14:05.117 "oacs": { 00:14:05.117 "firmware": 0, 00:14:05.117 "format": 0, 00:14:05.117 "ns_manage": 0, 00:14:05.117 "security": 0 00:14:05.117 }, 00:14:05.117 "serial_number": "00000000000000000000", 00:14:05.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:05.117 "vendor_id": "0x8086" 00:14:05.117 }, 00:14:05.117 "ns_data": { 00:14:05.117 "can_share": true, 00:14:05.117 "id": 1 00:14:05.117 }, 00:14:05.117 "trid": { 00:14:05.117 "adrfam": "IPv4", 00:14:05.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:05.117 "traddr": "10.0.0.2", 00:14:05.117 "trsvcid": "4421", 00:14:05.117 "trtype": "TCP" 00:14:05.117 }, 00:14:05.117 "vs": { 00:14:05.117 "nvme_version": "1.3" 00:14:05.117 } 00:14:05.117 } 00:14:05.117 ] 00:14:05.117 }, 00:14:05.117 "memory_domains": [ 00:14:05.117 { 00:14:05.117 "dma_device_id": "system", 00:14:05.117 "dma_device_type": 1 00:14:05.117 } 00:14:05.117 ], 00:14:05.117 "name": "nvme0n1", 00:14:05.117 "num_blocks": 2097152, 00:14:05.117 "product_name": "NVMe disk", 00:14:05.117 "supported_io_types": { 00:14:05.117 "abort": true, 00:14:05.117 "compare": true, 00:14:05.117 "compare_and_write": true, 00:14:05.117 "flush": true, 00:14:05.117 "nvme_admin": true, 00:14:05.117 "nvme_io": true, 00:14:05.117 "read": true, 00:14:05.117 "reset": true, 00:14:05.117 "unmap": false, 00:14:05.117 "write": true, 00:14:05.117 "write_zeroes": true 00:14:05.117 }, 00:14:05.117 "uuid": "4d9b28ed-37f5-4f4a-b966-adbc998f72db", 00:14:05.117 "zoned": false 00:14:05.117 } 00:14:05.117 ] 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.hmR44g5CB3 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
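[editor's note] The second half of the test exercises TLS: a PSK in NVMe TLS interchange format is written to a temp file, the subsystem is switched to explicit host allow-listing, a second listener on port 4421 is created with --secure-channel, and the controller is re-attached through it with the same PSK. Condensed from the trace (the key literal is the test's fixed example key, not a secret; the temp path differs per run):

  # TLS/PSK portion of async_init.sh, condensed (values as traced)
  key_path=$(mktemp)                                   # /tmp/tmp.hmR44g5CB3 in this run
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
          -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # both sides warn that TLS support is experimental and that the PSK-path options are deprecated for v24.09
  rpc_cmd bdev_nvme_detach_controller nvme0
  rm -f "$key_path"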
nvmfcleanup 00:14:05.117 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.375 rmmod nvme_tcp 00:14:05.375 rmmod nvme_fabrics 00:14:05.375 rmmod nvme_keyring 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 80191 ']' 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 80191 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 80191 ']' 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 80191 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:05.375 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80191 00:14:05.376 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:05.376 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:05.376 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80191' 00:14:05.376 killing process with pid 80191 00:14:05.376 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 80191 00:14:05.376 [2024-05-14 23:01:17.631926] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:05.376 [2024-05-14 23:01:17.631964] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:05.376 [2024-05-14 23:01:17.631976] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:05.376 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 80191 00:14:05.633 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.633 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.633 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.633 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.633 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.633 23:01:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.633 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.633 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.633 23:01:17 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:05.633 00:14:05.633 real 0m2.609s 00:14:05.633 user 0m2.538s 00:14:05.633 sys 0m0.543s 00:14:05.633 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:05.633 23:01:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:14:05.633 ************************************ 00:14:05.633 END TEST nvmf_async_init 00:14:05.633 ************************************ 00:14:05.633 23:01:17 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:14:05.633 23:01:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:05.633 23:01:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:05.633 23:01:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:05.633 ************************************ 00:14:05.633 START TEST dma 00:14:05.633 ************************************ 00:14:05.633 23:01:17 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:14:05.633 * Looking for test storage... 00:14:05.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:05.633 23:01:17 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.633 23:01:17 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.633 23:01:17 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.633 23:01:17 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.633 23:01:17 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.633 23:01:17 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.633 23:01:17 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.633 23:01:17 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:14:05.633 23:01:17 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.633 23:01:17 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.633 23:01:17 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:14:05.633 23:01:17 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:14:05.633 00:14:05.633 real 0m0.104s 00:14:05.633 user 0m0.049s 00:14:05.633 sys 0m0.062s 00:14:05.633 23:01:17 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:05.633 23:01:17 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:14:05.633 ************************************ 
00:14:05.633 END TEST dma 00:14:05.633 ************************************ 00:14:05.892 23:01:18 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:05.892 23:01:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:05.892 23:01:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:05.892 23:01:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:05.892 ************************************ 00:14:05.892 START TEST nvmf_identify 00:14:05.892 ************************************ 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:05.892 * Looking for test storage... 00:14:05.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.892 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:05.893 Cannot find device "nvmf_tgt_br" 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.893 Cannot find device "nvmf_tgt_br2" 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:05.893 Cannot find device "nvmf_tgt_br" 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:14:05.893 Cannot find device "nvmf_tgt_br2" 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.893 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:06.156 23:01:18 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:06.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:14:06.156 00:14:06.156 --- 10.0.0.2 ping statistics --- 00:14:06.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.156 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:06.156 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:06.156 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:06.156 00:14:06.156 --- 10.0.0.3 ping statistics --- 00:14:06.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.156 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:14:06.156 00:14:06.156 --- 10.0.0.1 ping statistics --- 00:14:06.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.156 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=80457 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 80457 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 80457 ']' 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
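For reference, the veth/bridge topology that nvmf_veth_init builds in the trace above can be reproduced by hand with the same iproute2 and iptables calls; a condensed sketch (run as root; interface names, addresses and the 4420 port are the values used by the test, second target interface included for completeness):

    # create the target network namespace and the veth pairs used by the test
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, moved into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addresses: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the peer ends together in the root namespace
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP traffic in (port 4420) and forwarding across the bridge, then sanity-check reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
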
00:14:06.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:06.156 23:01:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.414 [2024-05-14 23:01:18.604655] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:06.414 [2024-05-14 23:01:18.604741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.414 [2024-05-14 23:01:18.742160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.672 [2024-05-14 23:01:18.815663] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.672 [2024-05-14 23:01:18.815730] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.672 [2024-05-14 23:01:18.815752] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.672 [2024-05-14 23:01:18.815789] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.672 [2024-05-14 23:01:18.815805] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.672 [2024-05-14 23:01:18.816862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.672 [2024-05-14 23:01:18.816910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.672 [2024-05-14 23:01:18.816955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.672 [2024-05-14 23:01:18.816959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.239 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:07.239 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:14:07.239 23:01:19 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.239 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.239 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:07.239 [2024-05-14 23:01:19.617288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:07.496 Malloc0 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.496 
23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:07.496 [2024-05-14 23:01:19.708940] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:07.496 [2024-05-14 23:01:19.709495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:07.496 [ 00:14:07.496 { 00:14:07.496 "allow_any_host": true, 00:14:07.496 "hosts": [], 00:14:07.496 "listen_addresses": [ 00:14:07.496 { 00:14:07.496 "adrfam": "IPv4", 00:14:07.496 "traddr": "10.0.0.2", 00:14:07.496 "trsvcid": "4420", 00:14:07.496 "trtype": "TCP" 00:14:07.496 } 00:14:07.496 ], 00:14:07.496 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:07.496 "subtype": "Discovery" 00:14:07.496 }, 00:14:07.496 { 00:14:07.496 "allow_any_host": true, 00:14:07.496 "hosts": [], 00:14:07.496 "listen_addresses": [ 00:14:07.496 { 00:14:07.496 "adrfam": "IPv4", 00:14:07.496 "traddr": "10.0.0.2", 00:14:07.496 "trsvcid": "4420", 00:14:07.496 "trtype": "TCP" 00:14:07.496 } 00:14:07.496 ], 00:14:07.496 "max_cntlid": 65519, 00:14:07.496 "max_namespaces": 32, 00:14:07.496 "min_cntlid": 1, 00:14:07.496 "model_number": "SPDK bdev Controller", 00:14:07.496 "namespaces": [ 00:14:07.496 { 00:14:07.496 "bdev_name": "Malloc0", 00:14:07.496 "eui64": "ABCDEF0123456789", 00:14:07.496 "name": "Malloc0", 00:14:07.496 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:07.496 "nsid": 1, 00:14:07.496 "uuid": "a8a587c0-e52c-41a1-9c63-9d65d4fcba38" 00:14:07.496 } 00:14:07.496 ], 00:14:07.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.496 "serial_number": "SPDK00000000000001", 00:14:07.496 "subtype": "NVMe" 00:14:07.496 } 00:14:07.496 ] 00:14:07.496 23:01:19 
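The rpc_cmd calls traced above correspond to SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock socket; a minimal sketch of the same target bring-up, assuming an SPDK checkout/build under a placeholder $SPDK_DIR and the namespace created earlier:

    # start the target inside the namespace (same flags as the test: shm id 0, tracepoint mask 0xFFFF, 4-core mask)
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    # wait until the RPC socket /var/tmp/spdk.sock is accepting requests before configuring

    # TCP transport, Malloc backing bdev, one subsystem with one namespace, data + discovery listeners
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$SPDK_DIR/scripts/rpc.py" nvmf_get_subsystems   # should report the discovery subsystem and cnode1 as shown above
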
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.496 23:01:19 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:07.496 [2024-05-14 23:01:19.757737] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:07.496 [2024-05-14 23:01:19.757811] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80510 ] 00:14:07.757 [2024-05-14 23:01:19.900265] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:07.757 [2024-05-14 23:01:19.900338] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:07.757 [2024-05-14 23:01:19.900345] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:07.757 [2024-05-14 23:01:19.900361] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:07.757 [2024-05-14 23:01:19.900375] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:07.757 [2024-05-14 23:01:19.900506] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:07.757 [2024-05-14 23:01:19.900571] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb63280 0 00:14:07.757 [2024-05-14 23:01:19.912783] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:07.757 [2024-05-14 23:01:19.912808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:07.757 [2024-05-14 23:01:19.912814] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:07.757 [2024-05-14 23:01:19.912818] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:07.757 [2024-05-14 23:01:19.912864] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.912871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.912876] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb63280) 00:14:07.757 [2024-05-14 23:01:19.912891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:07.757 [2024-05-14 23:01:19.912923] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab950, cid 0, qid 0 00:14:07.757 [2024-05-14 23:01:19.920789] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.757 [2024-05-14 23:01:19.920811] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.757 [2024-05-14 23:01:19.920816] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.920821] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbab950) on tqpair=0xb63280 00:14:07.757 [2024-05-14 23:01:19.920832] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:07.757 [2024-05-14 23:01:19.920841] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 
00:14:07.757 [2024-05-14 23:01:19.920848] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:07.757 [2024-05-14 23:01:19.920864] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.920871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.920875] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb63280) 00:14:07.757 [2024-05-14 23:01:19.920885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.757 [2024-05-14 23:01:19.920914] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab950, cid 0, qid 0 00:14:07.757 [2024-05-14 23:01:19.921051] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.757 [2024-05-14 23:01:19.921058] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.757 [2024-05-14 23:01:19.921063] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921067] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbab950) on tqpair=0xb63280 00:14:07.757 [2024-05-14 23:01:19.921074] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:07.757 [2024-05-14 23:01:19.921082] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:07.757 [2024-05-14 23:01:19.921091] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921096] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921101] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb63280) 00:14:07.757 [2024-05-14 23:01:19.921109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.757 [2024-05-14 23:01:19.921129] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab950, cid 0, qid 0 00:14:07.757 [2024-05-14 23:01:19.921219] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.757 [2024-05-14 23:01:19.921226] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.757 [2024-05-14 23:01:19.921230] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921234] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbab950) on tqpair=0xb63280 00:14:07.757 [2024-05-14 23:01:19.921241] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:07.757 [2024-05-14 23:01:19.921250] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:07.757 [2024-05-14 23:01:19.921258] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921263] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921267] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb63280) 00:14:07.757 [2024-05-14 23:01:19.921275] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.757 [2024-05-14 23:01:19.921294] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab950, cid 0, qid 0 00:14:07.757 [2024-05-14 23:01:19.921375] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.757 [2024-05-14 23:01:19.921383] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.757 [2024-05-14 23:01:19.921387] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921392] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbab950) on tqpair=0xb63280 00:14:07.757 [2024-05-14 23:01:19.921398] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:07.757 [2024-05-14 23:01:19.921409] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921414] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921418] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb63280) 00:14:07.757 [2024-05-14 23:01:19.921426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.757 [2024-05-14 23:01:19.921445] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab950, cid 0, qid 0 00:14:07.757 [2024-05-14 23:01:19.921525] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.757 [2024-05-14 23:01:19.921532] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.757 [2024-05-14 23:01:19.921536] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921540] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbab950) on tqpair=0xb63280 00:14:07.757 [2024-05-14 23:01:19.921546] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:07.757 [2024-05-14 23:01:19.921552] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:07.757 [2024-05-14 23:01:19.921561] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:07.757 [2024-05-14 23:01:19.921667] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:07.757 [2024-05-14 23:01:19.921674] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:07.757 [2024-05-14 23:01:19.921684] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921689] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921693] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb63280) 00:14:07.757 [2024-05-14 23:01:19.921701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.757 [2024-05-14 23:01:19.921720] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab950, cid 0, qid 0 00:14:07.757 [2024-05-14 23:01:19.921827] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.757 [2024-05-14 23:01:19.921839] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.757 [2024-05-14 23:01:19.921844] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921849] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbab950) on tqpair=0xb63280 00:14:07.757 [2024-05-14 23:01:19.921855] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:07.757 [2024-05-14 23:01:19.921866] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.921875] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb63280) 00:14:07.757 [2024-05-14 23:01:19.921884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.757 [2024-05-14 23:01:19.921905] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab950, cid 0, qid 0 00:14:07.757 [2024-05-14 23:01:19.921986] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.757 [2024-05-14 23:01:19.921994] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.757 [2024-05-14 23:01:19.921998] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.757 [2024-05-14 23:01:19.922003] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbab950) on tqpair=0xb63280 00:14:07.757 [2024-05-14 23:01:19.922008] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:07.758 [2024-05-14 23:01:19.922014] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:07.758 [2024-05-14 23:01:19.922022] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:07.758 [2024-05-14 23:01:19.922038] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:07.758 [2024-05-14 23:01:19.922050] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922055] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb63280) 00:14:07.758 [2024-05-14 23:01:19.922063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.758 [2024-05-14 23:01:19.922083] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab950, cid 0, qid 0 00:14:07.758 [2024-05-14 23:01:19.922221] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:07.758 [2024-05-14 23:01:19.922229] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:07.758 [2024-05-14 23:01:19.922233] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922238] 
nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb63280): datao=0, datal=4096, cccid=0 00:14:07.758 [2024-05-14 23:01:19.922243] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbab950) on tqpair(0xb63280): expected_datao=0, payload_size=4096 00:14:07.758 [2024-05-14 23:01:19.922248] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922257] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922262] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922272] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.758 [2024-05-14 23:01:19.922278] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.758 [2024-05-14 23:01:19.922282] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922286] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbab950) on tqpair=0xb63280 00:14:07.758 [2024-05-14 23:01:19.922296] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:07.758 [2024-05-14 23:01:19.922302] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:07.758 [2024-05-14 23:01:19.922307] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:07.758 [2024-05-14 23:01:19.922313] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:07.758 [2024-05-14 23:01:19.922318] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:07.758 [2024-05-14 23:01:19.922323] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:07.758 [2024-05-14 23:01:19.922333] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:07.758 [2024-05-14 23:01:19.922345] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922351] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922355] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb63280) 00:14:07.758 [2024-05-14 23:01:19.922363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:07.758 [2024-05-14 23:01:19.922385] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab950, cid 0, qid 0 00:14:07.758 [2024-05-14 23:01:19.922490] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.758 [2024-05-14 23:01:19.922498] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.758 [2024-05-14 23:01:19.922502] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922506] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbab950) on tqpair=0xb63280 00:14:07.758 [2024-05-14 23:01:19.922515] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922519] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922523] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb63280) 00:14:07.758 [2024-05-14 23:01:19.922531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.758 [2024-05-14 23:01:19.922537] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922542] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922546] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb63280) 00:14:07.758 [2024-05-14 23:01:19.922552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.758 [2024-05-14 23:01:19.922559] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922563] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922567] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb63280) 00:14:07.758 [2024-05-14 23:01:19.922573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.758 [2024-05-14 23:01:19.922580] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922584] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922588] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb63280) 00:14:07.758 [2024-05-14 23:01:19.922594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.758 [2024-05-14 23:01:19.922600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:07.758 [2024-05-14 23:01:19.922613] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:07.758 [2024-05-14 23:01:19.922622] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922626] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb63280) 00:14:07.758 [2024-05-14 23:01:19.922634] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.758 [2024-05-14 23:01:19.922656] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbab950, cid 0, qid 0 00:14:07.758 [2024-05-14 23:01:19.922663] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabab0, cid 1, qid 0 00:14:07.758 [2024-05-14 23:01:19.922668] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabc10, cid 2, qid 0 00:14:07.758 [2024-05-14 23:01:19.922673] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabd70, cid 3, qid 0 00:14:07.758 [2024-05-14 23:01:19.922678] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabed0, cid 4, qid 0 00:14:07.758 [2024-05-14 23:01:19.922848] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.758 [2024-05-14 
23:01:19.922865] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.758 [2024-05-14 23:01:19.922870] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922874] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabed0) on tqpair=0xb63280 00:14:07.758 [2024-05-14 23:01:19.922880] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:07.758 [2024-05-14 23:01:19.922887] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:07.758 [2024-05-14 23:01:19.922900] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.922905] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb63280) 00:14:07.758 [2024-05-14 23:01:19.922913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.758 [2024-05-14 23:01:19.922935] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabed0, cid 4, qid 0 00:14:07.758 [2024-05-14 23:01:19.923046] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:07.758 [2024-05-14 23:01:19.923061] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:07.758 [2024-05-14 23:01:19.923066] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.923070] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb63280): datao=0, datal=4096, cccid=4 00:14:07.758 [2024-05-14 23:01:19.923076] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbabed0) on tqpair(0xb63280): expected_datao=0, payload_size=4096 00:14:07.758 [2024-05-14 23:01:19.923081] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.923089] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.923093] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.923102] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.758 [2024-05-14 23:01:19.923109] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.758 [2024-05-14 23:01:19.923113] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.923117] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabed0) on tqpair=0xb63280 00:14:07.758 [2024-05-14 23:01:19.923131] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:07.758 [2024-05-14 23:01:19.923162] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.923168] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb63280) 00:14:07.758 [2024-05-14 23:01:19.923176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.758 [2024-05-14 23:01:19.923184] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.923189] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.758 [2024-05-14 
23:01:19.923192] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb63280) 00:14:07.758 [2024-05-14 23:01:19.923199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.758 [2024-05-14 23:01:19.923225] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabed0, cid 4, qid 0 00:14:07.758 [2024-05-14 23:01:19.923233] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbac030, cid 5, qid 0 00:14:07.758 [2024-05-14 23:01:19.923387] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:07.758 [2024-05-14 23:01:19.923403] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:07.758 [2024-05-14 23:01:19.923408] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.923412] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb63280): datao=0, datal=1024, cccid=4 00:14:07.758 [2024-05-14 23:01:19.923417] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbabed0) on tqpair(0xb63280): expected_datao=0, payload_size=1024 00:14:07.758 [2024-05-14 23:01:19.923422] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.923430] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.923434] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:07.758 [2024-05-14 23:01:19.923441] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.759 [2024-05-14 23:01:19.923447] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.759 [2024-05-14 23:01:19.923451] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.923455] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbac030) on tqpair=0xb63280 00:14:07.759 [2024-05-14 23:01:19.963897] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.759 [2024-05-14 23:01:19.963931] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.759 [2024-05-14 23:01:19.963937] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.963943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabed0) on tqpair=0xb63280 00:14:07.759 [2024-05-14 23:01:19.963974] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.963981] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb63280) 00:14:07.759 [2024-05-14 23:01:19.963995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.759 [2024-05-14 23:01:19.964070] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabed0, cid 4, qid 0 00:14:07.759 [2024-05-14 23:01:19.964251] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:07.759 [2024-05-14 23:01:19.964278] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:07.759 [2024-05-14 23:01:19.964287] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.964292] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb63280): datao=0, datal=3072, cccid=4 00:14:07.759 [2024-05-14 23:01:19.964297] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbabed0) on tqpair(0xb63280): expected_datao=0, payload_size=3072 00:14:07.759 [2024-05-14 23:01:19.964303] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.964313] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.964318] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.964328] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.759 [2024-05-14 23:01:19.964334] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.759 [2024-05-14 23:01:19.964338] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.964343] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabed0) on tqpair=0xb63280 00:14:07.759 [2024-05-14 23:01:19.964357] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.964362] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb63280) 00:14:07.759 [2024-05-14 23:01:19.964371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.759 [2024-05-14 23:01:19.964402] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabed0, cid 4, qid 0 00:14:07.759 [2024-05-14 23:01:19.964523] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:07.759 [2024-05-14 23:01:19.964540] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:07.759 [2024-05-14 23:01:19.964545] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.964550] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb63280): datao=0, datal=8, cccid=4 00:14:07.759 [2024-05-14 23:01:19.964555] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbabed0) on tqpair(0xb63280): expected_datao=0, payload_size=8 00:14:07.759 [2024-05-14 23:01:19.964560] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.964567] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:19.964571] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:20.008843] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.759 [2024-05-14 23:01:20.008893] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.759 [2024-05-14 23:01:20.008900] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.759 [2024-05-14 23:01:20.008906] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabed0) on tqpair=0xb63280 00:14:07.759 ===================================================== 00:14:07.759 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:07.759 ===================================================== 00:14:07.759 Controller Capabilities/Features 00:14:07.759 ================================ 00:14:07.759 Vendor ID: 0000 00:14:07.759 Subsystem Vendor ID: 0000 00:14:07.759 Serial Number: .................... 00:14:07.759 Model Number: ........................................ 
00:14:07.759 Firmware Version: 24.05 00:14:07.759 Recommended Arb Burst: 0 00:14:07.759 IEEE OUI Identifier: 00 00 00 00:14:07.759 Multi-path I/O 00:14:07.759 May have multiple subsystem ports: No 00:14:07.759 May have multiple controllers: No 00:14:07.759 Associated with SR-IOV VF: No 00:14:07.759 Max Data Transfer Size: 131072 00:14:07.759 Max Number of Namespaces: 0 00:14:07.759 Max Number of I/O Queues: 1024 00:14:07.759 NVMe Specification Version (VS): 1.3 00:14:07.759 NVMe Specification Version (Identify): 1.3 00:14:07.759 Maximum Queue Entries: 128 00:14:07.759 Contiguous Queues Required: Yes 00:14:07.759 Arbitration Mechanisms Supported 00:14:07.759 Weighted Round Robin: Not Supported 00:14:07.759 Vendor Specific: Not Supported 00:14:07.759 Reset Timeout: 15000 ms 00:14:07.759 Doorbell Stride: 4 bytes 00:14:07.759 NVM Subsystem Reset: Not Supported 00:14:07.759 Command Sets Supported 00:14:07.759 NVM Command Set: Supported 00:14:07.759 Boot Partition: Not Supported 00:14:07.759 Memory Page Size Minimum: 4096 bytes 00:14:07.759 Memory Page Size Maximum: 4096 bytes 00:14:07.759 Persistent Memory Region: Not Supported 00:14:07.759 Optional Asynchronous Events Supported 00:14:07.759 Namespace Attribute Notices: Not Supported 00:14:07.759 Firmware Activation Notices: Not Supported 00:14:07.759 ANA Change Notices: Not Supported 00:14:07.759 PLE Aggregate Log Change Notices: Not Supported 00:14:07.759 LBA Status Info Alert Notices: Not Supported 00:14:07.759 EGE Aggregate Log Change Notices: Not Supported 00:14:07.759 Normal NVM Subsystem Shutdown event: Not Supported 00:14:07.759 Zone Descriptor Change Notices: Not Supported 00:14:07.759 Discovery Log Change Notices: Supported 00:14:07.759 Controller Attributes 00:14:07.759 128-bit Host Identifier: Not Supported 00:14:07.759 Non-Operational Permissive Mode: Not Supported 00:14:07.759 NVM Sets: Not Supported 00:14:07.759 Read Recovery Levels: Not Supported 00:14:07.759 Endurance Groups: Not Supported 00:14:07.759 Predictable Latency Mode: Not Supported 00:14:07.759 Traffic Based Keep ALive: Not Supported 00:14:07.759 Namespace Granularity: Not Supported 00:14:07.759 SQ Associations: Not Supported 00:14:07.759 UUID List: Not Supported 00:14:07.759 Multi-Domain Subsystem: Not Supported 00:14:07.759 Fixed Capacity Management: Not Supported 00:14:07.759 Variable Capacity Management: Not Supported 00:14:07.759 Delete Endurance Group: Not Supported 00:14:07.759 Delete NVM Set: Not Supported 00:14:07.759 Extended LBA Formats Supported: Not Supported 00:14:07.759 Flexible Data Placement Supported: Not Supported 00:14:07.759 00:14:07.759 Controller Memory Buffer Support 00:14:07.759 ================================ 00:14:07.759 Supported: No 00:14:07.759 00:14:07.759 Persistent Memory Region Support 00:14:07.759 ================================ 00:14:07.759 Supported: No 00:14:07.759 00:14:07.759 Admin Command Set Attributes 00:14:07.759 ============================ 00:14:07.759 Security Send/Receive: Not Supported 00:14:07.759 Format NVM: Not Supported 00:14:07.759 Firmware Activate/Download: Not Supported 00:14:07.759 Namespace Management: Not Supported 00:14:07.759 Device Self-Test: Not Supported 00:14:07.759 Directives: Not Supported 00:14:07.759 NVMe-MI: Not Supported 00:14:07.759 Virtualization Management: Not Supported 00:14:07.759 Doorbell Buffer Config: Not Supported 00:14:07.759 Get LBA Status Capability: Not Supported 00:14:07.759 Command & Feature Lockdown Capability: Not Supported 00:14:07.759 Abort Command Limit: 1 00:14:07.759 Async 
Event Request Limit: 4 00:14:07.759 Number of Firmware Slots: N/A 00:14:07.759 Firmware Slot 1 Read-Only: N/A 00:14:07.759 Firmware Activation Without Reset: N/A 00:14:07.759 Multiple Update Detection Support: N/A 00:14:07.759 Firmware Update Granularity: No Information Provided 00:14:07.759 Per-Namespace SMART Log: No 00:14:07.759 Asymmetric Namespace Access Log Page: Not Supported 00:14:07.759 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:07.759 Command Effects Log Page: Not Supported 00:14:07.759 Get Log Page Extended Data: Supported 00:14:07.759 Telemetry Log Pages: Not Supported 00:14:07.759 Persistent Event Log Pages: Not Supported 00:14:07.759 Supported Log Pages Log Page: May Support 00:14:07.759 Commands Supported & Effects Log Page: Not Supported 00:14:07.759 Feature Identifiers & Effects Log Page:May Support 00:14:07.759 NVMe-MI Commands & Effects Log Page: May Support 00:14:07.759 Data Area 4 for Telemetry Log: Not Supported 00:14:07.759 Error Log Page Entries Supported: 128 00:14:07.759 Keep Alive: Not Supported 00:14:07.759 00:14:07.759 NVM Command Set Attributes 00:14:07.759 ========================== 00:14:07.759 Submission Queue Entry Size 00:14:07.759 Max: 1 00:14:07.759 Min: 1 00:14:07.759 Completion Queue Entry Size 00:14:07.759 Max: 1 00:14:07.759 Min: 1 00:14:07.759 Number of Namespaces: 0 00:14:07.759 Compare Command: Not Supported 00:14:07.759 Write Uncorrectable Command: Not Supported 00:14:07.759 Dataset Management Command: Not Supported 00:14:07.759 Write Zeroes Command: Not Supported 00:14:07.759 Set Features Save Field: Not Supported 00:14:07.759 Reservations: Not Supported 00:14:07.759 Timestamp: Not Supported 00:14:07.759 Copy: Not Supported 00:14:07.759 Volatile Write Cache: Not Present 00:14:07.759 Atomic Write Unit (Normal): 1 00:14:07.759 Atomic Write Unit (PFail): 1 00:14:07.759 Atomic Compare & Write Unit: 1 00:14:07.759 Fused Compare & Write: Supported 00:14:07.759 Scatter-Gather List 00:14:07.759 SGL Command Set: Supported 00:14:07.760 SGL Keyed: Supported 00:14:07.760 SGL Bit Bucket Descriptor: Not Supported 00:14:07.760 SGL Metadata Pointer: Not Supported 00:14:07.760 Oversized SGL: Not Supported 00:14:07.760 SGL Metadata Address: Not Supported 00:14:07.760 SGL Offset: Supported 00:14:07.760 Transport SGL Data Block: Not Supported 00:14:07.760 Replay Protected Memory Block: Not Supported 00:14:07.760 00:14:07.760 Firmware Slot Information 00:14:07.760 ========================= 00:14:07.760 Active slot: 0 00:14:07.760 00:14:07.760 00:14:07.760 Error Log 00:14:07.760 ========= 00:14:07.760 00:14:07.760 Active Namespaces 00:14:07.760 ================= 00:14:07.760 Discovery Log Page 00:14:07.760 ================== 00:14:07.760 Generation Counter: 2 00:14:07.760 Number of Records: 2 00:14:07.760 Record Format: 0 00:14:07.760 00:14:07.760 Discovery Log Entry 0 00:14:07.760 ---------------------- 00:14:07.760 Transport Type: 3 (TCP) 00:14:07.760 Address Family: 1 (IPv4) 00:14:07.760 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:07.760 Entry Flags: 00:14:07.760 Duplicate Returned Information: 1 00:14:07.760 Explicit Persistent Connection Support for Discovery: 1 00:14:07.760 Transport Requirements: 00:14:07.760 Secure Channel: Not Required 00:14:07.760 Port ID: 0 (0x0000) 00:14:07.760 Controller ID: 65535 (0xffff) 00:14:07.760 Admin Max SQ Size: 128 00:14:07.760 Transport Service Identifier: 4420 00:14:07.760 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:07.760 Transport Address: 10.0.0.2 00:14:07.760 
Discovery Log Entry 1 00:14:07.760 ---------------------- 00:14:07.760 Transport Type: 3 (TCP) 00:14:07.760 Address Family: 1 (IPv4) 00:14:07.760 Subsystem Type: 2 (NVM Subsystem) 00:14:07.760 Entry Flags: 00:14:07.760 Duplicate Returned Information: 0 00:14:07.760 Explicit Persistent Connection Support for Discovery: 0 00:14:07.760 Transport Requirements: 00:14:07.760 Secure Channel: Not Required 00:14:07.760 Port ID: 0 (0x0000) 00:14:07.760 Controller ID: 65535 (0xffff) 00:14:07.760 Admin Max SQ Size: 128 00:14:07.760 Transport Service Identifier: 4420 00:14:07.760 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:07.760 Transport Address: 10.0.0.2 [2024-05-14 23:01:20.009043] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:07.760 [2024-05-14 23:01:20.009064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.760 [2024-05-14 23:01:20.009074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.760 [2024-05-14 23:01:20.009081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.760 [2024-05-14 23:01:20.009088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.760 [2024-05-14 23:01:20.009104] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009110] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009114] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb63280) 00:14:07.760 [2024-05-14 23:01:20.009129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.760 [2024-05-14 23:01:20.009162] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabd70, cid 3, qid 0 00:14:07.760 [2024-05-14 23:01:20.009269] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.760 [2024-05-14 23:01:20.009276] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.760 [2024-05-14 23:01:20.009280] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009285] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabd70) on tqpair=0xb63280 00:14:07.760 [2024-05-14 23:01:20.009295] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009300] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009304] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb63280) 00:14:07.760 [2024-05-14 23:01:20.009311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.760 [2024-05-14 23:01:20.009338] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabd70, cid 3, qid 0 00:14:07.760 [2024-05-14 23:01:20.009435] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.760 [2024-05-14 23:01:20.009442] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.760 [2024-05-14 23:01:20.009446] 
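The discovery log above reports two entries at 10.0.0.2:4420: the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1. Both can be queried outside the harness; a hedged sketch, reusing the test's spdk_nvme_identify invocation (with $SPDK_DIR again standing in for the SPDK build) and, as an alternative not used by this test, the kernel initiator via nvme-cli, assuming nvme-cli is installed and the nvme-tcp module loaded by the earlier modprobe:

    # SPDK userspace identify against the discovery controller (same trid string as the test)
    "$SPDK_DIR/build/bin/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

    # kernel-initiator equivalent: enumerate the discovery log, then connect to the NVM subsystem
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
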
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009451] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabd70) on tqpair=0xb63280 00:14:07.760 [2024-05-14 23:01:20.009457] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:07.760 [2024-05-14 23:01:20.009462] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:07.760 [2024-05-14 23:01:20.009472] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009478] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009482] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb63280) 00:14:07.760 [2024-05-14 23:01:20.009489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.760 [2024-05-14 23:01:20.009508] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabd70, cid 3, qid 0 00:14:07.760 [2024-05-14 23:01:20.009568] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.760 [2024-05-14 23:01:20.009575] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.760 [2024-05-14 23:01:20.009579] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009584] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabd70) on tqpair=0xb63280 00:14:07.760 [2024-05-14 23:01:20.009596] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009601] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009605] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb63280) 00:14:07.760 [2024-05-14 23:01:20.009613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.760 [2024-05-14 23:01:20.009630] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabd70, cid 3, qid 0 00:14:07.760 [2024-05-14 23:01:20.009685] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.760 [2024-05-14 23:01:20.009692] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.760 [2024-05-14 23:01:20.009696] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009700] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabd70) on tqpair=0xb63280 00:14:07.760 [2024-05-14 23:01:20.009711] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009716] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009720] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb63280) 00:14:07.760 [2024-05-14 23:01:20.009728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.760 [2024-05-14 23:01:20.009745] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabd70, cid 3, qid 0 00:14:07.760 [2024-05-14 23:01:20.009817] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.760 [2024-05-14 
23:01:20.009826] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.760 [2024-05-14 23:01:20.009830] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009835] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabd70) on tqpair=0xb63280 00:14:07.760 [2024-05-14 23:01:20.009846] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009852] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.760 [2024-05-14 23:01:20.009856] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb63280) 00:14:07.760 [2024-05-14 23:01:20.009864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.760 [2024-05-14 23:01:20.009885] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabd70, cid 3, qid 0 00:14:07.760 [2024-05-14 23:01:20.012468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.762 [2024-05-14 23:01:20.012475] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.762 [2024-05-14 23:01:20.012479] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.762 [2024-05-14 23:01:20.012483] nvme_tcp.c: 909:nvme_tcp_req_complete_safe:
*DEBUG*: complete tcp_req(0xbabd70) on tqpair=0xb63280 00:14:07.762 [2024-05-14 23:01:20.012494] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.762 [2024-05-14 23:01:20.012499] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.762 [2024-05-14 23:01:20.012503] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb63280) 00:14:07.762 [2024-05-14 23:01:20.012511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.762 [2024-05-14 23:01:20.012546] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabd70, cid 3, qid 0 00:14:07.762 [2024-05-14 23:01:20.012600] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.762 [2024-05-14 23:01:20.012608] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.762 [2024-05-14 23:01:20.012612] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.762 [2024-05-14 23:01:20.012616] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabd70) on tqpair=0xb63280 00:14:07.762 [2024-05-14 23:01:20.012627] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.762 [2024-05-14 23:01:20.012632] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.762 [2024-05-14 23:01:20.012636] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb63280) 00:14:07.762 [2024-05-14 23:01:20.012644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.762 [2024-05-14 23:01:20.012665] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabd70, cid 3, qid 0 00:14:07.762 [2024-05-14 23:01:20.012719] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.762 [2024-05-14 23:01:20.012727] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.762 [2024-05-14 23:01:20.012731] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.762 [2024-05-14 23:01:20.012735] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabd70) on tqpair=0xb63280 00:14:07.762 [2024-05-14 23:01:20.012746] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:07.762 [2024-05-14 23:01:20.012751] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:07.762 [2024-05-14 23:01:20.012755] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb63280) 00:14:07.762 [2024-05-14 23:01:20.016776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:07.762 [2024-05-14 23:01:20.016820] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbabd70, cid 3, qid 0 00:14:07.762 [2024-05-14 23:01:20.016921] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:07.762 [2024-05-14 23:01:20.016929] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:07.762 [2024-05-14 23:01:20.016933] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:07.762 [2024-05-14 23:01:20.016937] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbabd70) on tqpair=0xb63280 00:14:07.762 [2024-05-14 23:01:20.016947] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown 
complete in 7 milliseconds 00:14:07.763 00:14:07.763 23:01:20 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:07.763 [2024-05-14 23:01:20.049873] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:07.763 [2024-05-14 23:01:20.049928] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80516 ] 00:14:08.043 [2024-05-14 23:01:20.191099] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:08.043 [2024-05-14 23:01:20.191172] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:08.043 [2024-05-14 23:01:20.191180] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:08.043 [2024-05-14 23:01:20.191195] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:08.043 [2024-05-14 23:01:20.191209] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:08.043 [2024-05-14 23:01:20.191346] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:08.043 [2024-05-14 23:01:20.191398] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1cf1280 0 00:14:08.043 [2024-05-14 23:01:20.195784] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:08.043 [2024-05-14 23:01:20.195808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:08.043 [2024-05-14 23:01:20.195815] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:08.043 [2024-05-14 23:01:20.195819] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:08.043 [2024-05-14 23:01:20.195863] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.043 [2024-05-14 23:01:20.195871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.043 [2024-05-14 23:01:20.195875] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf1280) 00:14:08.043 [2024-05-14 23:01:20.195891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:08.043 [2024-05-14 23:01:20.195924] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39950, cid 0, qid 0 00:14:08.043 [2024-05-14 23:01:20.203785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.043 [2024-05-14 23:01:20.203809] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.043 [2024-05-14 23:01:20.203814] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.043 [2024-05-14 23:01:20.203820] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39950) on tqpair=0x1cf1280 00:14:08.043 [2024-05-14 23:01:20.203834] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:08.043 [2024-05-14 23:01:20.203843] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:08.043 [2024-05-14 23:01:20.203849] 
nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:08.043 [2024-05-14 23:01:20.203865] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.043 [2024-05-14 23:01:20.203871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.043 [2024-05-14 23:01:20.203876] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf1280) 00:14:08.043 [2024-05-14 23:01:20.203886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.043 [2024-05-14 23:01:20.203916] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39950, cid 0, qid 0 00:14:08.043 [2024-05-14 23:01:20.204004] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.043 [2024-05-14 23:01:20.204012] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.043 [2024-05-14 23:01:20.204016] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.043 [2024-05-14 23:01:20.204021] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39950) on tqpair=0x1cf1280 00:14:08.043 [2024-05-14 23:01:20.204028] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:08.043 [2024-05-14 23:01:20.204037] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:08.043 [2024-05-14 23:01:20.204045] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204050] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204054] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.204062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.044 [2024-05-14 23:01:20.204083] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39950, cid 0, qid 0 00:14:08.044 [2024-05-14 23:01:20.204171] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.044 [2024-05-14 23:01:20.204178] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.044 [2024-05-14 23:01:20.204182] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204187] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39950) on tqpair=0x1cf1280 00:14:08.044 [2024-05-14 23:01:20.204194] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:08.044 [2024-05-14 23:01:20.204204] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:08.044 [2024-05-14 23:01:20.204212] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204216] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204220] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.204228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:08.044 [2024-05-14 23:01:20.204248] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39950, cid 0, qid 0 00:14:08.044 [2024-05-14 23:01:20.204331] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.044 [2024-05-14 23:01:20.204338] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.044 [2024-05-14 23:01:20.204342] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204346] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39950) on tqpair=0x1cf1280 00:14:08.044 [2024-05-14 23:01:20.204353] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:08.044 [2024-05-14 23:01:20.204365] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204370] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204374] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.204381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.044 [2024-05-14 23:01:20.204400] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39950, cid 0, qid 0 00:14:08.044 [2024-05-14 23:01:20.204480] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.044 [2024-05-14 23:01:20.204488] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.044 [2024-05-14 23:01:20.204492] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204496] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39950) on tqpair=0x1cf1280 00:14:08.044 [2024-05-14 23:01:20.204502] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:08.044 [2024-05-14 23:01:20.204508] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:08.044 [2024-05-14 23:01:20.204517] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:08.044 [2024-05-14 23:01:20.204623] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:08.044 [2024-05-14 23:01:20.204633] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:08.044 [2024-05-14 23:01:20.204644] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204649] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204654] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.204662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.044 [2024-05-14 23:01:20.204683] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39950, cid 0, qid 0 00:14:08.044 [2024-05-14 23:01:20.204781] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:14:08.044 [2024-05-14 23:01:20.204798] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.044 [2024-05-14 23:01:20.204803] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204808] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39950) on tqpair=0x1cf1280 00:14:08.044 [2024-05-14 23:01:20.204815] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:08.044 [2024-05-14 23:01:20.204826] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204832] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204836] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.204843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.044 [2024-05-14 23:01:20.204865] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39950, cid 0, qid 0 00:14:08.044 [2024-05-14 23:01:20.204943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.044 [2024-05-14 23:01:20.204950] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.044 [2024-05-14 23:01:20.204955] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.204959] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39950) on tqpair=0x1cf1280 00:14:08.044 [2024-05-14 23:01:20.204966] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:08.044 [2024-05-14 23:01:20.204971] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:08.044 [2024-05-14 23:01:20.204980] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:08.044 [2024-05-14 23:01:20.204996] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:08.044 [2024-05-14 23:01:20.205008] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205013] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.205021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.044 [2024-05-14 23:01:20.205041] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39950, cid 0, qid 0 00:14:08.044 [2024-05-14 23:01:20.205172] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:08.044 [2024-05-14 23:01:20.205180] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:08.044 [2024-05-14 23:01:20.205184] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205188] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf1280): datao=0, datal=4096, cccid=0 00:14:08.044 [2024-05-14 23:01:20.205193] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1d39950) on tqpair(0x1cf1280): expected_datao=0, payload_size=4096 00:14:08.044 [2024-05-14 23:01:20.205199] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205207] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205212] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205221] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.044 [2024-05-14 23:01:20.205228] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.044 [2024-05-14 23:01:20.205232] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205236] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39950) on tqpair=0x1cf1280 00:14:08.044 [2024-05-14 23:01:20.205246] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:08.044 [2024-05-14 23:01:20.205252] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:08.044 [2024-05-14 23:01:20.205257] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:08.044 [2024-05-14 23:01:20.205261] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:08.044 [2024-05-14 23:01:20.205266] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:08.044 [2024-05-14 23:01:20.205272] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:08.044 [2024-05-14 23:01:20.205282] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:08.044 [2024-05-14 23:01:20.205295] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205300] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205304] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.205313] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:08.044 [2024-05-14 23:01:20.205333] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39950, cid 0, qid 0 00:14:08.044 [2024-05-14 23:01:20.205419] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.044 [2024-05-14 23:01:20.205431] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.044 [2024-05-14 23:01:20.205436] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205441] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39950) on tqpair=0x1cf1280 00:14:08.044 [2024-05-14 23:01:20.205450] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205455] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205459] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.205466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.044 [2024-05-14 23:01:20.205473] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205477] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205481] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.205488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.044 [2024-05-14 23:01:20.205494] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205498] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205502] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.205509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.044 [2024-05-14 23:01:20.205515] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205519] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205523] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.205529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.044 [2024-05-14 23:01:20.205535] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:08.044 [2024-05-14 23:01:20.205548] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:08.044 [2024-05-14 23:01:20.205557] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205561] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.205569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.044 [2024-05-14 23:01:20.205592] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39950, cid 0, qid 0 00:14:08.044 [2024-05-14 23:01:20.205599] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39ab0, cid 1, qid 0 00:14:08.044 [2024-05-14 23:01:20.205605] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39c10, cid 2, qid 0 00:14:08.044 [2024-05-14 23:01:20.205610] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.044 [2024-05-14 23:01:20.205615] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39ed0, cid 4, qid 0 00:14:08.044 [2024-05-14 23:01:20.205755] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.044 [2024-05-14 23:01:20.205787] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.044 [2024-05-14 23:01:20.205793] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205797] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1d39ed0) on tqpair=0x1cf1280 00:14:08.044 [2024-05-14 23:01:20.205804] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:08.044 [2024-05-14 23:01:20.205810] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:08.044 [2024-05-14 23:01:20.205824] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:08.044 [2024-05-14 23:01:20.205832] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:08.044 [2024-05-14 23:01:20.205840] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205845] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205849] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.205857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:08.044 [2024-05-14 23:01:20.205878] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39ed0, cid 4, qid 0 00:14:08.044 [2024-05-14 23:01:20.205955] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.044 [2024-05-14 23:01:20.205962] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.044 [2024-05-14 23:01:20.205966] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.205970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39ed0) on tqpair=0x1cf1280 00:14:08.044 [2024-05-14 23:01:20.206028] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:08.044 [2024-05-14 23:01:20.206040] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:08.044 [2024-05-14 23:01:20.206049] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.206054] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf1280) 00:14:08.044 [2024-05-14 23:01:20.206062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.044 [2024-05-14 23:01:20.206083] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39ed0, cid 4, qid 0 00:14:08.044 [2024-05-14 23:01:20.206176] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:08.044 [2024-05-14 23:01:20.206183] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:08.044 [2024-05-14 23:01:20.206187] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.206191] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf1280): datao=0, datal=4096, cccid=4 00:14:08.044 [2024-05-14 23:01:20.206196] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d39ed0) on tqpair(0x1cf1280): expected_datao=0, payload_size=4096 00:14:08.044 [2024-05-14 23:01:20.206201] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.206209] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.206213] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:08.044 [2024-05-14 23:01:20.206222] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.206229] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.206232] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206237] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39ed0) on tqpair=0x1cf1280 00:14:08.045 [2024-05-14 23:01:20.206253] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:08.045 [2024-05-14 23:01:20.206265] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:08.045 [2024-05-14 23:01:20.206276] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:08.045 [2024-05-14 23:01:20.206285] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206289] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf1280) 00:14:08.045 [2024-05-14 23:01:20.206297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.045 [2024-05-14 23:01:20.206318] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39ed0, cid 4, qid 0 00:14:08.045 [2024-05-14 23:01:20.206429] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:08.045 [2024-05-14 23:01:20.206437] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:08.045 [2024-05-14 23:01:20.206441] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206445] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf1280): datao=0, datal=4096, cccid=4 00:14:08.045 [2024-05-14 23:01:20.206450] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d39ed0) on tqpair(0x1cf1280): expected_datao=0, payload_size=4096 00:14:08.045 [2024-05-14 23:01:20.206455] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206463] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206467] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206476] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.206482] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.206486] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206490] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39ed0) on tqpair=0x1cf1280 00:14:08.045 [2024-05-14 23:01:20.206507] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:08.045 [2024-05-14 23:01:20.206519] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:08.045 [2024-05-14 23:01:20.206528] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206533] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf1280) 00:14:08.045 [2024-05-14 23:01:20.206541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.045 [2024-05-14 23:01:20.206562] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39ed0, cid 4, qid 0 00:14:08.045 [2024-05-14 23:01:20.206658] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:08.045 [2024-05-14 23:01:20.206665] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:08.045 [2024-05-14 23:01:20.206670] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206673] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf1280): datao=0, datal=4096, cccid=4 00:14:08.045 [2024-05-14 23:01:20.206679] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d39ed0) on tqpair(0x1cf1280): expected_datao=0, payload_size=4096 00:14:08.045 [2024-05-14 23:01:20.206683] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206691] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206695] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206704] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.206710] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.206714] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206718] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39ed0) on tqpair=0x1cf1280 00:14:08.045 [2024-05-14 23:01:20.206728] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:08.045 [2024-05-14 23:01:20.206737] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:08.045 [2024-05-14 23:01:20.206750] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:08.045 [2024-05-14 23:01:20.206758] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:08.045 [2024-05-14 23:01:20.206776] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:08.045 [2024-05-14 23:01:20.206783] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:08.045 [2024-05-14 23:01:20.206788] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:08.045 [2024-05-14 23:01:20.206794] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:08.045 [2024-05-14 23:01:20.206815] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206821] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf1280) 00:14:08.045 [2024-05-14 23:01:20.206828] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.045 [2024-05-14 23:01:20.206836] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206840] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.206844] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cf1280) 00:14:08.045 [2024-05-14 23:01:20.206851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.045 [2024-05-14 23:01:20.206878] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39ed0, cid 4, qid 0 00:14:08.045 [2024-05-14 23:01:20.206885] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3a030, cid 5, qid 0 00:14:08.045 [2024-05-14 23:01:20.206989] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.206996] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.207000] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207005] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39ed0) on tqpair=0x1cf1280 00:14:08.045 [2024-05-14 23:01:20.207013] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.207020] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.207024] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207028] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3a030) on tqpair=0x1cf1280 00:14:08.045 [2024-05-14 23:01:20.207039] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207044] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cf1280) 00:14:08.045 [2024-05-14 23:01:20.207052] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.045 [2024-05-14 23:01:20.207071] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3a030, cid 5, qid 0 00:14:08.045 [2024-05-14 23:01:20.207151] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.207164] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.207169] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207173] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3a030) on tqpair=0x1cf1280 00:14:08.045 [2024-05-14 23:01:20.207185] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207190] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cf1280) 00:14:08.045 [2024-05-14 23:01:20.207197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:08.045 [2024-05-14 23:01:20.207217] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3a030, cid 5, qid 0 00:14:08.045 [2024-05-14 23:01:20.207300] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.207315] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.207320] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207325] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3a030) on tqpair=0x1cf1280 00:14:08.045 [2024-05-14 23:01:20.207338] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207342] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cf1280) 00:14:08.045 [2024-05-14 23:01:20.207350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.045 [2024-05-14 23:01:20.207370] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3a030, cid 5, qid 0 00:14:08.045 [2024-05-14 23:01:20.207454] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.207462] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.207466] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207471] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3a030) on tqpair=0x1cf1280 00:14:08.045 [2024-05-14 23:01:20.207486] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207492] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cf1280) 00:14:08.045 [2024-05-14 23:01:20.207499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.045 [2024-05-14 23:01:20.207507] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207511] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cf1280) 00:14:08.045 [2024-05-14 23:01:20.207518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.045 [2024-05-14 23:01:20.207526] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207530] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1cf1280) 00:14:08.045 [2024-05-14 23:01:20.207537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.045 [2024-05-14 23:01:20.207545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.207549] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cf1280) 00:14:08.045 [2024-05-14 23:01:20.207556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.045 [2024-05-14 23:01:20.207577] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3a030, cid 5, qid 0 00:14:08.045 [2024-05-14 23:01:20.207584] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39ed0, cid 4, qid 0 00:14:08.045 [2024-05-14 23:01:20.207590] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3a190, cid 6, qid 0 00:14:08.045 [2024-05-14 23:01:20.207595] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3a2f0, cid 7, qid 0 00:14:08.045 [2024-05-14 23:01:20.211786] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:08.045 [2024-05-14 23:01:20.211808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:08.045 [2024-05-14 23:01:20.211814] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211818] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf1280): datao=0, datal=8192, cccid=5 00:14:08.045 [2024-05-14 23:01:20.211824] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d3a030) on tqpair(0x1cf1280): expected_datao=0, payload_size=8192 00:14:08.045 [2024-05-14 23:01:20.211829] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211838] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211843] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211849] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:08.045 [2024-05-14 23:01:20.211855] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:08.045 [2024-05-14 23:01:20.211859] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211863] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf1280): datao=0, datal=512, cccid=4 00:14:08.045 [2024-05-14 23:01:20.211868] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d39ed0) on tqpair(0x1cf1280): expected_datao=0, payload_size=512 00:14:08.045 [2024-05-14 23:01:20.211873] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211879] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211883] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:08.045 [2024-05-14 23:01:20.211895] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:08.045 [2024-05-14 23:01:20.211899] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211903] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf1280): datao=0, datal=512, cccid=6 00:14:08.045 [2024-05-14 23:01:20.211908] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d3a190) on tqpair(0x1cf1280): expected_datao=0, payload_size=512 00:14:08.045 [2024-05-14 23:01:20.211913] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211919] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211923] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211929] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:08.045 [2024-05-14 23:01:20.211935] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:08.045 [2024-05-14 23:01:20.211939] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211943] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cf1280): datao=0, datal=4096, cccid=7 00:14:08.045 [2024-05-14 23:01:20.211948] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d3a2f0) on tqpair(0x1cf1280): expected_datao=0, payload_size=4096 00:14:08.045 [2024-05-14 23:01:20.211952] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211959] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211963] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211969] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.211975] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.211980] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.211984] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3a030) on tqpair=0x1cf1280 00:14:08.045 [2024-05-14 23:01:20.212004] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.212012] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.212016] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.212020] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39ed0) on tqpair=0x1cf1280 00:14:08.045 [2024-05-14 23:01:20.212031] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.212038] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.212042] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.212046] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3a190) on tqpair=0x1cf1280 00:14:08.045 [2024-05-14 23:01:20.212057] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.045 [2024-05-14 23:01:20.212064] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.045 [2024-05-14 23:01:20.212068] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.045 [2024-05-14 23:01:20.212072] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3a2f0) on tqpair=0x1cf1280 00:14:08.045 ===================================================== 00:14:08.045 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:08.045 ===================================================== 00:14:08.045 Controller Capabilities/Features 00:14:08.045 ================================ 00:14:08.045 Vendor ID: 8086 00:14:08.045 Subsystem Vendor ID: 8086 00:14:08.045 Serial Number: SPDK00000000000001 00:14:08.045 Model Number: SPDK bdev Controller 00:14:08.045 Firmware Version: 24.05 00:14:08.045 Recommended Arb Burst: 6 00:14:08.045 IEEE OUI Identifier: e4 d2 5c 00:14:08.045 Multi-path I/O 00:14:08.045 May have multiple subsystem ports: Yes 00:14:08.045 May have multiple controllers: Yes 00:14:08.045 Associated with SR-IOV VF: No 00:14:08.046 Max Data Transfer Size: 131072 00:14:08.046 Max Number of Namespaces: 32 00:14:08.046 Max 
Number of I/O Queues: 127 00:14:08.046 NVMe Specification Version (VS): 1.3 00:14:08.046 NVMe Specification Version (Identify): 1.3 00:14:08.046 Maximum Queue Entries: 128 00:14:08.046 Contiguous Queues Required: Yes 00:14:08.046 Arbitration Mechanisms Supported 00:14:08.046 Weighted Round Robin: Not Supported 00:14:08.046 Vendor Specific: Not Supported 00:14:08.046 Reset Timeout: 15000 ms 00:14:08.046 Doorbell Stride: 4 bytes 00:14:08.046 NVM Subsystem Reset: Not Supported 00:14:08.046 Command Sets Supported 00:14:08.046 NVM Command Set: Supported 00:14:08.046 Boot Partition: Not Supported 00:14:08.046 Memory Page Size Minimum: 4096 bytes 00:14:08.046 Memory Page Size Maximum: 4096 bytes 00:14:08.046 Persistent Memory Region: Not Supported 00:14:08.046 Optional Asynchronous Events Supported 00:14:08.046 Namespace Attribute Notices: Supported 00:14:08.046 Firmware Activation Notices: Not Supported 00:14:08.046 ANA Change Notices: Not Supported 00:14:08.046 PLE Aggregate Log Change Notices: Not Supported 00:14:08.046 LBA Status Info Alert Notices: Not Supported 00:14:08.046 EGE Aggregate Log Change Notices: Not Supported 00:14:08.046 Normal NVM Subsystem Shutdown event: Not Supported 00:14:08.046 Zone Descriptor Change Notices: Not Supported 00:14:08.046 Discovery Log Change Notices: Not Supported 00:14:08.046 Controller Attributes 00:14:08.046 128-bit Host Identifier: Supported 00:14:08.046 Non-Operational Permissive Mode: Not Supported 00:14:08.046 NVM Sets: Not Supported 00:14:08.046 Read Recovery Levels: Not Supported 00:14:08.046 Endurance Groups: Not Supported 00:14:08.046 Predictable Latency Mode: Not Supported 00:14:08.046 Traffic Based Keep ALive: Not Supported 00:14:08.046 Namespace Granularity: Not Supported 00:14:08.046 SQ Associations: Not Supported 00:14:08.046 UUID List: Not Supported 00:14:08.046 Multi-Domain Subsystem: Not Supported 00:14:08.046 Fixed Capacity Management: Not Supported 00:14:08.046 Variable Capacity Management: Not Supported 00:14:08.046 Delete Endurance Group: Not Supported 00:14:08.046 Delete NVM Set: Not Supported 00:14:08.046 Extended LBA Formats Supported: Not Supported 00:14:08.046 Flexible Data Placement Supported: Not Supported 00:14:08.046 00:14:08.046 Controller Memory Buffer Support 00:14:08.046 ================================ 00:14:08.046 Supported: No 00:14:08.046 00:14:08.046 Persistent Memory Region Support 00:14:08.046 ================================ 00:14:08.046 Supported: No 00:14:08.046 00:14:08.046 Admin Command Set Attributes 00:14:08.046 ============================ 00:14:08.046 Security Send/Receive: Not Supported 00:14:08.046 Format NVM: Not Supported 00:14:08.046 Firmware Activate/Download: Not Supported 00:14:08.046 Namespace Management: Not Supported 00:14:08.046 Device Self-Test: Not Supported 00:14:08.046 Directives: Not Supported 00:14:08.046 NVMe-MI: Not Supported 00:14:08.046 Virtualization Management: Not Supported 00:14:08.046 Doorbell Buffer Config: Not Supported 00:14:08.046 Get LBA Status Capability: Not Supported 00:14:08.046 Command & Feature Lockdown Capability: Not Supported 00:14:08.046 Abort Command Limit: 4 00:14:08.046 Async Event Request Limit: 4 00:14:08.046 Number of Firmware Slots: N/A 00:14:08.046 Firmware Slot 1 Read-Only: N/A 00:14:08.046 Firmware Activation Without Reset: N/A 00:14:08.046 Multiple Update Detection Support: N/A 00:14:08.046 Firmware Update Granularity: No Information Provided 00:14:08.046 Per-Namespace SMART Log: No 00:14:08.046 Asymmetric Namespace Access Log Page: Not Supported 
00:14:08.046 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:08.046 Command Effects Log Page: Supported 00:14:08.046 Get Log Page Extended Data: Supported 00:14:08.046 Telemetry Log Pages: Not Supported 00:14:08.046 Persistent Event Log Pages: Not Supported 00:14:08.046 Supported Log Pages Log Page: May Support 00:14:08.046 Commands Supported & Effects Log Page: Not Supported 00:14:08.046 Feature Identifiers & Effects Log Page:May Support 00:14:08.046 NVMe-MI Commands & Effects Log Page: May Support 00:14:08.046 Data Area 4 for Telemetry Log: Not Supported 00:14:08.046 Error Log Page Entries Supported: 128 00:14:08.046 Keep Alive: Supported 00:14:08.046 Keep Alive Granularity: 10000 ms 00:14:08.046 00:14:08.046 NVM Command Set Attributes 00:14:08.046 ========================== 00:14:08.046 Submission Queue Entry Size 00:14:08.046 Max: 64 00:14:08.046 Min: 64 00:14:08.046 Completion Queue Entry Size 00:14:08.046 Max: 16 00:14:08.046 Min: 16 00:14:08.046 Number of Namespaces: 32 00:14:08.046 Compare Command: Supported 00:14:08.046 Write Uncorrectable Command: Not Supported 00:14:08.046 Dataset Management Command: Supported 00:14:08.046 Write Zeroes Command: Supported 00:14:08.046 Set Features Save Field: Not Supported 00:14:08.046 Reservations: Supported 00:14:08.046 Timestamp: Not Supported 00:14:08.046 Copy: Supported 00:14:08.046 Volatile Write Cache: Present 00:14:08.046 Atomic Write Unit (Normal): 1 00:14:08.046 Atomic Write Unit (PFail): 1 00:14:08.046 Atomic Compare & Write Unit: 1 00:14:08.046 Fused Compare & Write: Supported 00:14:08.046 Scatter-Gather List 00:14:08.046 SGL Command Set: Supported 00:14:08.046 SGL Keyed: Supported 00:14:08.046 SGL Bit Bucket Descriptor: Not Supported 00:14:08.046 SGL Metadata Pointer: Not Supported 00:14:08.046 Oversized SGL: Not Supported 00:14:08.046 SGL Metadata Address: Not Supported 00:14:08.046 SGL Offset: Supported 00:14:08.046 Transport SGL Data Block: Not Supported 00:14:08.046 Replay Protected Memory Block: Not Supported 00:14:08.046 00:14:08.046 Firmware Slot Information 00:14:08.046 ========================= 00:14:08.046 Active slot: 1 00:14:08.046 Slot 1 Firmware Revision: 24.05 00:14:08.046 00:14:08.046 00:14:08.046 Commands Supported and Effects 00:14:08.046 ============================== 00:14:08.046 Admin Commands 00:14:08.046 -------------- 00:14:08.046 Get Log Page (02h): Supported 00:14:08.046 Identify (06h): Supported 00:14:08.046 Abort (08h): Supported 00:14:08.046 Set Features (09h): Supported 00:14:08.046 Get Features (0Ah): Supported 00:14:08.046 Asynchronous Event Request (0Ch): Supported 00:14:08.046 Keep Alive (18h): Supported 00:14:08.046 I/O Commands 00:14:08.046 ------------ 00:14:08.046 Flush (00h): Supported LBA-Change 00:14:08.046 Write (01h): Supported LBA-Change 00:14:08.046 Read (02h): Supported 00:14:08.046 Compare (05h): Supported 00:14:08.046 Write Zeroes (08h): Supported LBA-Change 00:14:08.046 Dataset Management (09h): Supported LBA-Change 00:14:08.046 Copy (19h): Supported LBA-Change 00:14:08.046 Unknown (79h): Supported LBA-Change 00:14:08.046 Unknown (7Ah): Supported 00:14:08.046 00:14:08.046 Error Log 00:14:08.046 ========= 00:14:08.046 00:14:08.046 Arbitration 00:14:08.046 =========== 00:14:08.046 Arbitration Burst: 1 00:14:08.046 00:14:08.046 Power Management 00:14:08.046 ================ 00:14:08.046 Number of Power States: 1 00:14:08.046 Current Power State: Power State #0 00:14:08.046 Power State #0: 00:14:08.046 Max Power: 0.00 W 00:14:08.046 Non-Operational State: Operational 00:14:08.046 Entry 
Latency: Not Reported 00:14:08.046 Exit Latency: Not Reported 00:14:08.046 Relative Read Throughput: 0 00:14:08.046 Relative Read Latency: 0 00:14:08.046 Relative Write Throughput: 0 00:14:08.046 Relative Write Latency: 0 00:14:08.046 Idle Power: Not Reported 00:14:08.046 Active Power: Not Reported 00:14:08.046 Non-Operational Permissive Mode: Not Supported 00:14:08.046 00:14:08.046 Health Information 00:14:08.046 ================== 00:14:08.046 Critical Warnings: 00:14:08.046 Available Spare Space: OK 00:14:08.046 Temperature: OK 00:14:08.046 Device Reliability: OK 00:14:08.046 Read Only: No 00:14:08.046 Volatile Memory Backup: OK 00:14:08.046 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:08.046 Temperature Threshold: [2024-05-14 23:01:20.212186] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212194] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cf1280) 00:14:08.046 [2024-05-14 23:01:20.212203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.046 [2024-05-14 23:01:20.212233] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d3a2f0, cid 7, qid 0 00:14:08.046 [2024-05-14 23:01:20.212337] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.046 [2024-05-14 23:01:20.212345] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.046 [2024-05-14 23:01:20.212349] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212353] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d3a2f0) on tqpair=0x1cf1280 00:14:08.046 [2024-05-14 23:01:20.212393] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:08.046 [2024-05-14 23:01:20.212407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.046 [2024-05-14 23:01:20.212415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.046 [2024-05-14 23:01:20.212422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.046 [2024-05-14 23:01:20.212428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.046 [2024-05-14 23:01:20.212439] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212443] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212448] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.046 [2024-05-14 23:01:20.212464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.046 [2024-05-14 23:01:20.212488] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.046 [2024-05-14 23:01:20.212577] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.046 [2024-05-14 23:01:20.212586] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.046 [2024-05-14 23:01:20.212590] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212595] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.046 [2024-05-14 23:01:20.212604] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212609] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212613] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.046 [2024-05-14 23:01:20.212621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.046 [2024-05-14 23:01:20.212645] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.046 [2024-05-14 23:01:20.212743] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.046 [2024-05-14 23:01:20.212759] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.046 [2024-05-14 23:01:20.212778] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212783] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.046 [2024-05-14 23:01:20.212790] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:08.046 [2024-05-14 23:01:20.212795] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:08.046 [2024-05-14 23:01:20.212807] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212812] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212816] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.046 [2024-05-14 23:01:20.212824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.046 [2024-05-14 23:01:20.212846] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.046 [2024-05-14 23:01:20.212931] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.046 [2024-05-14 23:01:20.212938] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.046 [2024-05-14 23:01:20.212942] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212946] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.046 [2024-05-14 23:01:20.212959] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212964] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.212968] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.046 [2024-05-14 23:01:20.212976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.046 [2024-05-14 23:01:20.212995] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.046 [2024-05-14 23:01:20.213077] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.046 [2024-05-14 23:01:20.213085] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:14:08.046 [2024-05-14 23:01:20.213089] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.213093] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.046 [2024-05-14 23:01:20.213105] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.213109] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.213114] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.046 [2024-05-14 23:01:20.213121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.046 [2024-05-14 23:01:20.213140] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.046 [2024-05-14 23:01:20.213219] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.046 [2024-05-14 23:01:20.213226] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.046 [2024-05-14 23:01:20.213230] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.213234] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.046 [2024-05-14 23:01:20.213246] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.213250] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.046 [2024-05-14 23:01:20.213254] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.046 [2024-05-14 23:01:20.213262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.046 [2024-05-14 23:01:20.213280] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.213359] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.213366] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.213370] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213374] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.213386] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213391] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213395] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.213402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.213421] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.213491] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.213503] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.213507] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213512] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.213524] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213530] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213538] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.213545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.213565] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.213640] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.213655] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.213660] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213664] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.213676] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213681] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213685] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.213693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.213713] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.213808] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.213817] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.213821] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213826] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.213838] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213843] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213847] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.213855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.213875] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.213951] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.213958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.213962] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213966] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.213978] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:14:08.047 [2024-05-14 23:01:20.213983] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.213987] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.213995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.214014] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.214090] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.214097] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.214101] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214105] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.214117] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214122] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214126] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.214133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.214152] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.214222] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.214229] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.214233] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214237] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.214249] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214254] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214258] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.214265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.214284] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.214360] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.214367] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.214371] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214376] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.214387] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214392] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214396] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.214404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.214422] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.214498] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.214505] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.214509] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214513] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.214525] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214530] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214534] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.214541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.214560] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.214642] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.214649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.214653] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214658] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.214669] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214674] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214678] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.214686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.214705] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.214797] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.214816] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.214821] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214826] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.214838] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214843] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214847] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.214855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.214876] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.214938] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.214945] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.214949] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214954] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.214965] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214970] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.214974] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.214982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.215000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.215075] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.215082] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.215086] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215091] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.215102] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215107] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215111] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.215119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.215137] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.215211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.215218] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.215222] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215226] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.215238] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215247] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.215255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.215273] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 
0 00:14:08.047 [2024-05-14 23:01:20.215347] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.215354] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.215358] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215362] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.215374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215379] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215383] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.215391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.215409] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.215489] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.215497] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.215500] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215505] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.215516] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215521] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215525] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.215533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.215551] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.215627] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.215638] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.215643] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215647] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.215660] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215664] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.215669] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.215676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.215695] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.219774] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.219793] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.219799] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.219804] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.219820] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.219826] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.219830] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cf1280) 00:14:08.047 [2024-05-14 23:01:20.219839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:08.047 [2024-05-14 23:01:20.219865] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d39d70, cid 3, qid 0 00:14:08.047 [2024-05-14 23:01:20.219939] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:08.047 [2024-05-14 23:01:20.219946] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:08.047 [2024-05-14 23:01:20.219950] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:08.047 [2024-05-14 23:01:20.219955] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d39d70) on tqpair=0x1cf1280 00:14:08.047 [2024-05-14 23:01:20.219965] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:14:08.047 0 Kelvin (-273 Celsius) 00:14:08.047 Available Spare: 0% 00:14:08.047 Available Spare Threshold: 0% 00:14:08.047 Life Percentage Used: 0% 00:14:08.047 Data Units Read: 0 00:14:08.047 Data Units Written: 0 00:14:08.047 Host Read Commands: 0 00:14:08.047 Host Write Commands: 0 00:14:08.047 Controller Busy Time: 0 minutes 00:14:08.047 Power Cycles: 0 00:14:08.047 Power On Hours: 0 hours 00:14:08.047 Unsafe Shutdowns: 0 00:14:08.047 Unrecoverable Media Errors: 0 00:14:08.047 Lifetime Error Log Entries: 0 00:14:08.047 Warning Temperature Time: 0 minutes 00:14:08.047 Critical Temperature Time: 0 minutes 00:14:08.047 00:14:08.047 Number of Queues 00:14:08.047 ================ 00:14:08.047 Number of I/O Submission Queues: 127 00:14:08.047 Number of I/O Completion Queues: 127 00:14:08.047 00:14:08.047 Active Namespaces 00:14:08.047 ================= 00:14:08.047 Namespace ID:1 00:14:08.047 Error Recovery Timeout: Unlimited 00:14:08.047 Command Set Identifier: NVM (00h) 00:14:08.047 Deallocate: Supported 00:14:08.047 Deallocated/Unwritten Error: Not Supported 00:14:08.047 Deallocated Read Value: Unknown 00:14:08.047 Deallocate in Write Zeroes: Not Supported 00:14:08.047 Deallocated Guard Field: 0xFFFF 00:14:08.047 Flush: Supported 00:14:08.047 Reservation: Supported 00:14:08.047 Namespace Sharing Capabilities: Multiple Controllers 00:14:08.047 Size (in LBAs): 131072 (0GiB) 00:14:08.047 Capacity (in LBAs): 131072 (0GiB) 00:14:08.047 Utilization (in LBAs): 131072 (0GiB) 00:14:08.047 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:08.047 EUI64: ABCDEF0123456789 00:14:08.047 UUID: a8a587c0-e52c-41a1-9c63-9d65d4fcba38 00:14:08.047 Thin Provisioning: Not Supported 00:14:08.047 Per-NS Atomic Units: Yes 00:14:08.047 Atomic Boundary Size (Normal): 0 00:14:08.047 Atomic Boundary Size (PFail): 0 00:14:08.047 Atomic Boundary Offset: 0 00:14:08.047 Maximum Single Source Range Length: 65535 00:14:08.047 Maximum Copy Length: 65535 00:14:08.047 Maximum Source Range 
Count: 1 00:14:08.047 NGUID/EUI64 Never Reused: No 00:14:08.047 Namespace Write Protected: No 00:14:08.047 Number of LBA Formats: 1 00:14:08.047 Current LBA Format: LBA Format #00 00:14:08.047 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:08.047 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:08.047 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:08.047 rmmod nvme_tcp 00:14:08.047 rmmod nvme_fabrics 00:14:08.047 rmmod nvme_keyring 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 80457 ']' 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 80457 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 80457 ']' 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 80457 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80457 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:08.048 killing process with pid 80457 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80457' 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 80457 00:14:08.048 [2024-05-14 23:01:20.357612] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:08.048 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 80457 00:14:08.306 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:08.306 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:08.306 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
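For reference, the identify output captured above can be reproduced by hand against the same listener (TCP 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1) while the target is still up. The sketch below is an assumption based on stock SPDK example binaries and common nvme-cli usage, not commands taken from this log; binary paths and /dev/nvme* names depend on the local build and device enumeration.

  # Query the subsystem with SPDK's identify example (binary path is an assumption):
  ./build/examples/identify \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

  # Or with nvme-cli from the initiator side (device names depend on enumeration):
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0       # mirrors the Controller Capabilities/Features dump above
  nvme id-ns /dev/nvme0n1       # mirrors the Active Namespaces section above
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

  # Teardown matching the rpc_cmd call in the trace:
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1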
00:14:08.306 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:08.306 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:08.306 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.306 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.306 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.306 23:01:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:08.306 00:14:08.306 real 0m2.562s 00:14:08.306 user 0m7.151s 00:14:08.306 sys 0m0.592s 00:14:08.306 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:08.306 23:01:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:08.306 ************************************ 00:14:08.306 END TEST nvmf_identify 00:14:08.306 ************************************ 00:14:08.306 23:01:20 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:08.306 23:01:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:08.306 23:01:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:08.306 23:01:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.306 ************************************ 00:14:08.306 START TEST nvmf_perf 00:14:08.306 ************************************ 00:14:08.306 23:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:08.564 * Looking for test storage... 00:14:08.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf 
-- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.564 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:08.565 Cannot find device "nvmf_tgt_br" 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:08.565 Cannot find device "nvmf_tgt_br2" 
00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:08.565 Cannot find device "nvmf_tgt_br" 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:08.565 Cannot find device "nvmf_tgt_br2" 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.565 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.565 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:08.565 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:08.822 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:08.822 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:08.822 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:08.822 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:08.822 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:08.822 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:08.822 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:08.822 23:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:08.822 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:08.822 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:08.822 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:08.822 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:08.822 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:08.822 23:01:21 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:08.822 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:08.822 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:08.822 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:08.822 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:08.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:14:08.822 00:14:08.822 --- 10.0.0.2 ping statistics --- 00:14:08.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.822 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:14:08.822 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:08.822 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:08.822 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:14:08.822 00:14:08.822 --- 10.0.0.3 ping statistics --- 00:14:08.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.823 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:08.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:08.823 00:14:08.823 --- 10.0.0.1 ping statistics --- 00:14:08.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.823 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=80682 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 80682 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 80682 ']' 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 
00:14:08.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:08.823 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:08.823 [2024-05-14 23:01:21.168447] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:08.823 [2024-05-14 23:01:21.168555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.115 [2024-05-14 23:01:21.305350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.115 [2024-05-14 23:01:21.368156] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.115 [2024-05-14 23:01:21.368220] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.115 [2024-05-14 23:01:21.368232] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.115 [2024-05-14 23:01:21.368240] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.115 [2024-05-14 23:01:21.368247] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.115 [2024-05-14 23:01:21.368339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.115 [2024-05-14 23:01:21.368804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.115 [2024-05-14 23:01:21.369064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.115 [2024-05-14 23:01:21.369437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.115 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:09.115 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:14:09.115 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.115 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.115 23:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:09.395 23:01:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.395 23:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:09.395 23:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:09.654 23:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:09.654 23:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:09.912 23:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:09.912 23:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:10.478 23:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:10.478 23:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:10.478 
23:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:10.478 23:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:10.478 23:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:10.478 [2024-05-14 23:01:22.862351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.735 23:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:10.993 23:01:23 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:10.993 23:01:23 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:11.262 23:01:23 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:11.262 23:01:23 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:11.520 23:01:23 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.520 [2024-05-14 23:01:23.875438] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:11.520 [2024-05-14 23:01:23.875727] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.520 23:01:23 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.778 23:01:24 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:11.778 23:01:24 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:11.778 23:01:24 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:11.778 23:01:24 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:13.153 Initializing NVMe Controllers 00:14:13.153 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:13.153 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:13.153 Initialization complete. Launching workers. 00:14:13.153 ======================================================== 00:14:13.154 Latency(us) 00:14:13.154 Device Information : IOPS MiB/s Average min max 00:14:13.154 PCIE (0000:00:10.0) NSID 1 from core 0: 24928.00 97.38 1283.26 299.85 7075.02 00:14:13.154 ======================================================== 00:14:13.154 Total : 24928.00 97.38 1283.26 299.85 7075.02 00:14:13.154 00:14:13.154 23:01:25 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:14.530 Initializing NVMe Controllers 00:14:14.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:14.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:14.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:14.530 Initialization complete. 
Launching workers. 00:14:14.530 ======================================================== 00:14:14.530 Latency(us) 00:14:14.530 Device Information : IOPS MiB/s Average min max 00:14:14.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3370.39 13.17 296.37 117.01 5207.17 00:14:14.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.42 7024.69 12041.55 00:14:14.530 ======================================================== 00:14:14.530 Total : 3493.89 13.65 574.35 117.01 12041.55 00:14:14.530 00:14:14.530 23:01:26 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:15.464 Initializing NVMe Controllers 00:14:15.464 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:15.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:15.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:15.464 Initialization complete. Launching workers. 00:14:15.464 ======================================================== 00:14:15.464 Latency(us) 00:14:15.464 Device Information : IOPS MiB/s Average min max 00:14:15.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7654.14 29.90 4182.03 741.35 14584.99 00:14:15.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2684.00 10.48 12030.76 5995.38 24206.35 00:14:15.464 ======================================================== 00:14:15.464 Total : 10338.13 40.38 6219.72 741.35 24206.35 00:14:15.464 00:14:15.722 23:01:27 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:15.722 23:01:27 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:18.250 Initializing NVMe Controllers 00:14:18.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.250 Controller IO queue size 128, less than required. 00:14:18.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.250 Controller IO queue size 128, less than required. 00:14:18.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:18.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:18.250 Initialization complete. Launching workers. 
00:14:18.250 ======================================================== 00:14:18.250 Latency(us) 00:14:18.250 Device Information : IOPS MiB/s Average min max 00:14:18.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1427.48 356.87 91154.09 55320.32 177891.18 00:14:18.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 555.49 138.87 244753.49 89711.07 386584.21 00:14:18.250 ======================================================== 00:14:18.250 Total : 1982.97 495.74 134182.07 55320.32 386584.21 00:14:18.250 00:14:18.250 23:01:30 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:18.508 Initializing NVMe Controllers 00:14:18.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.508 Controller IO queue size 128, less than required. 00:14:18.508 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.508 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:18.508 Controller IO queue size 128, less than required. 00:14:18.508 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.508 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:18.508 WARNING: Some requested NVMe devices were skipped 00:14:18.508 No valid NVMe controllers or AIO or URING devices found 00:14:18.508 23:01:30 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:21.032 Initializing NVMe Controllers 00:14:21.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:21.032 Controller IO queue size 128, less than required. 00:14:21.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:21.032 Controller IO queue size 128, less than required. 00:14:21.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:21.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:21.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:21.032 Initialization complete. Launching workers. 
00:14:21.032 00:14:21.032 ==================== 00:14:21.032 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:21.032 TCP transport: 00:14:21.032 polls: 6218 00:14:21.032 idle_polls: 3086 00:14:21.032 sock_completions: 3132 00:14:21.032 nvme_completions: 3997 00:14:21.032 submitted_requests: 6022 00:14:21.032 queued_requests: 1 00:14:21.032 00:14:21.032 ==================== 00:14:21.032 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:21.032 TCP transport: 00:14:21.032 polls: 5161 00:14:21.032 idle_polls: 2409 00:14:21.032 sock_completions: 2752 00:14:21.032 nvme_completions: 5663 00:14:21.032 submitted_requests: 8586 00:14:21.032 queued_requests: 1 00:14:21.032 ======================================================== 00:14:21.032 Latency(us) 00:14:21.032 Device Information : IOPS MiB/s Average min max 00:14:21.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 998.33 249.58 131697.74 73635.21 260812.69 00:14:21.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1414.56 353.64 91866.43 48202.52 150684.90 00:14:21.032 ======================================================== 00:14:21.032 Total : 2412.89 603.22 108346.64 48202.52 260812.69 00:14:21.032 00:14:21.032 23:01:33 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:21.032 23:01:33 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.289 rmmod nvme_tcp 00:14:21.289 rmmod nvme_fabrics 00:14:21.289 rmmod nvme_keyring 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 80682 ']' 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 80682 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 80682 ']' 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 80682 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80682 00:14:21.289 killing process with pid 80682 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:21.289 23:01:33 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80682' 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 80682 00:14:21.289 [2024-05-14 23:01:33.676945] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:21.289 23:01:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 80682 00:14:22.220 23:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.220 23:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.220 23:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.220 23:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.220 23:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.220 23:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.220 23:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.220 23:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.220 23:01:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:22.220 00:14:22.220 real 0m13.889s 00:14:22.220 user 0m50.597s 00:14:22.220 sys 0m3.413s 00:14:22.220 23:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:22.220 23:01:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:22.220 ************************************ 00:14:22.220 END TEST nvmf_perf 00:14:22.220 ************************************ 00:14:22.220 23:01:34 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:22.220 23:01:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:22.220 23:01:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:22.220 23:01:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:22.220 ************************************ 00:14:22.220 START TEST nvmf_fio_host 00:14:22.220 ************************************ 00:14:22.220 23:01:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:22.479 * Looking for test storage... 
00:14:22.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:22.479 23:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.479 23:01:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.479 23:01:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.479 23:01:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:22.480 23:01:34 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:22.480 Cannot find device "nvmf_tgt_br" 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.480 Cannot find device "nvmf_tgt_br2" 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:22.480 Cannot find device "nvmf_tgt_br" 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:22.480 Cannot find device "nvmf_tgt_br2" 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:22.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:22.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:22.480 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:22.738 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:22.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:22.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:14:22.739 00:14:22.739 --- 10.0.0.2 ping statistics --- 00:14:22.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.739 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:22.739 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:22.739 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:22.739 00:14:22.739 --- 10.0.0.3 ping statistics --- 00:14:22.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.739 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:22.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:22.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:22.739 00:14:22.739 --- 10.0.0.1 ping statistics --- 00:14:22.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.739 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:22.739 23:01:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=81145 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 81145 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 81145 ']' 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:22.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:22.739 23:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:22.739 [2024-05-14 23:01:35.068184] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:22.739 [2024-05-14 23:01:35.068273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.997 [2024-05-14 23:01:35.205253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.997 [2024-05-14 23:01:35.292665] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.997 [2024-05-14 23:01:35.292737] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:22.997 [2024-05-14 23:01:35.292757] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.997 [2024-05-14 23:01:35.292792] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.997 [2024-05-14 23:01:35.292805] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.997 [2024-05-14 23:01:35.292945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.997 [2024-05-14 23:01:35.293252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.997 [2024-05-14 23:01:35.293971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.997 [2024-05-14 23:01:35.294003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:23.931 [2024-05-14 23:01:36.215011] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:23.931 Malloc1 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:23.931 [2024-05-14 23:01:36.297058] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:23.931 [2024-05-14 23:01:36.297368] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:14:23.931 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:14:24.190 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:14:24.190 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:14:24.190 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:14:24.190 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:24.190 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:14:24.190 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:14:24.190 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:14:24.190 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:14:24.190 23:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:24.190 23:01:36 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:24.190 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:24.190 fio-3.35 00:14:24.190 Starting 1 thread 00:14:26.719 00:14:26.719 test: (groupid=0, jobs=1): err= 0: pid=81228: Tue May 14 23:01:38 2024 00:14:26.719 read: IOPS=8295, BW=32.4MiB/s (34.0MB/s)(65.1MiB/2008msec) 00:14:26.719 slat (usec): min=2, max=254, avg= 2.76, stdev= 2.57 00:14:26.719 clat (usec): min=3110, max=17721, avg=8143.50, stdev=1298.60 00:14:26.719 lat (usec): min=3139, max=17723, avg=8146.25, stdev=1298.70 00:14:26.719 clat percentiles (usec): 00:14:26.719 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7046], 20.00th=[ 7308], 00:14:26.719 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7963], 00:14:26.719 | 70.00th=[ 8225], 80.00th=[ 8717], 90.00th=[10028], 95.00th=[10683], 00:14:26.719 | 99.00th=[13435], 99.50th=[14353], 99.90th=[15533], 99.95th=[16319], 00:14:26.719 | 99.99th=[17695] 00:14:26.719 bw ( KiB/s): min=31640, max=35416, per=99.98%, avg=33176.00, stdev=1730.38, samples=4 00:14:26.719 iops : min= 7910, max= 8854, avg=8294.00, stdev=432.60, samples=4 00:14:26.719 write: IOPS=8291, BW=32.4MiB/s (34.0MB/s)(65.0MiB/2008msec); 0 zone resets 00:14:26.719 slat (usec): min=2, max=250, avg= 2.87, stdev= 2.30 00:14:26.719 clat (usec): min=1842, max=14379, avg=7236.81, stdev=1099.38 00:14:26.719 lat (usec): min=1852, max=14381, avg=7239.68, stdev=1099.36 00:14:26.719 clat percentiles (usec): 00:14:26.719 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:14:26.719 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:14:26.719 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8717], 95.00th=[ 9372], 00:14:26.719 | 99.00th=[12125], 99.50th=[13042], 99.90th=[13960], 99.95th=[14222], 00:14:26.719 | 99.99th=[14353] 00:14:26.719 bw ( KiB/s): min=31616, max=34816, per=100.00%, avg=33184.00, stdev=1419.96, samples=4 00:14:26.719 iops : min= 7904, max= 8704, avg=8296.00, stdev=354.99, samples=4 00:14:26.719 lat (msec) : 2=0.01%, 4=0.09%, 10=93.86%, 20=6.04% 00:14:26.719 cpu : usr=66.12%, sys=24.31%, ctx=9, majf=0, minf=5 00:14:26.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:26.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:26.719 issued rwts: total=16658,16649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:26.719 00:14:26.719 Run status group 0 (all jobs): 00:14:26.719 READ: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=65.1MiB (68.2MB), run=2008-2008msec 00:14:26.719 WRITE: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=65.0MiB (68.2MB), run=2008-2008msec 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:26.719 23:01:38 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:26.719 23:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:26.719 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:26.719 fio-3.35 00:14:26.719 Starting 1 thread 00:14:29.245 00:14:29.245 test: (groupid=0, jobs=1): err= 0: pid=81272: Tue May 14 23:01:41 2024 00:14:29.245 read: IOPS=7059, BW=110MiB/s (116MB/s)(222MiB/2008msec) 00:14:29.245 slat (usec): min=3, max=120, avg= 4.06, stdev= 1.87 00:14:29.245 clat (usec): min=3428, max=21103, avg=10659.54, stdev=2642.56 00:14:29.245 lat (usec): min=3432, max=21106, avg=10663.60, stdev=2642.63 00:14:29.245 clat percentiles (usec): 00:14:29.245 | 1.00th=[ 5473], 5.00th=[ 6587], 10.00th=[ 7177], 20.00th=[ 8225], 00:14:29.245 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11338], 00:14:29.245 | 70.00th=[11994], 80.00th=[12911], 90.00th=[14353], 95.00th=[15139], 00:14:29.245 | 99.00th=[16909], 99.50th=[18220], 99.90th=[19792], 99.95th=[20055], 00:14:29.245 | 99.99th=[20055] 00:14:29.245 bw ( KiB/s): min=47744, max=70432, per=49.98%, avg=56456.00, stdev=10800.79, samples=4 00:14:29.245 iops : min= 2984, max= 4402, avg=3528.50, stdev=675.05, samples=4 00:14:29.245 write: IOPS=4261, BW=66.6MiB/s (69.8MB/s)(116MiB/1739msec); 0 zone 
resets 00:14:29.245 slat (usec): min=37, max=211, avg=40.07, stdev= 5.29 00:14:29.245 clat (usec): min=6907, max=26610, avg=13455.95, stdev=2962.42 00:14:29.245 lat (usec): min=6945, max=26647, avg=13496.02, stdev=2962.31 00:14:29.245 clat percentiles (usec): 00:14:29.245 | 1.00th=[ 8291], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10814], 00:14:29.245 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13042], 60.00th=[13829], 00:14:29.245 | 70.00th=[14746], 80.00th=[15926], 90.00th=[17695], 95.00th=[19006], 00:14:29.245 | 99.00th=[21103], 99.50th=[21890], 99.90th=[25822], 99.95th=[26346], 00:14:29.245 | 99.99th=[26608] 00:14:29.245 bw ( KiB/s): min=49984, max=73728, per=86.48%, avg=58968.00, stdev=10853.72, samples=4 00:14:29.245 iops : min= 3124, max= 4608, avg=3685.50, stdev=678.36, samples=4 00:14:29.245 lat (msec) : 4=0.03%, 10=30.71%, 20=68.36%, 50=0.90% 00:14:29.245 cpu : usr=73.36%, sys=18.18%, ctx=20, majf=0, minf=22 00:14:29.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:14:29.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:29.245 issued rwts: total=14176,7411,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:29.245 00:14:29.245 Run status group 0 (all jobs): 00:14:29.245 READ: bw=110MiB/s (116MB/s), 110MiB/s-110MiB/s (116MB/s-116MB/s), io=222MiB (232MB), run=2008-2008msec 00:14:29.245 WRITE: bw=66.6MiB/s (69.8MB/s), 66.6MiB/s-66.6MiB/s (69.8MB/s-69.8MB/s), io=116MiB (121MB), run=1739-1739msec 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:29.245 rmmod nvme_tcp 00:14:29.245 rmmod nvme_fabrics 00:14:29.245 rmmod nvme_keyring 00:14:29.245 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 81145 ']' 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 81145 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 81145 ']' 
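The two fio jobs above (example_config.fio at 4 KiB and mock_sgl_config.fio at 16 KiB) reduce to the same invocation pattern: preload the SPDK NVMe fio plugin and hand it the transport ID of the freshly configured TCP subsystem through --filename. A condensed sketch of what the fio_plugin helper executes, using the paths shown in this job's xtrace (PLUGIN/JOBFILE are shorthands added here for readability):

# SPDK fio plugin run against the TCP subsystem at 10.0.0.2:4420
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
JOBFILE=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

LD_PRELOAD="$PLUGIN" /usr/src/fio/fio "$JOBFILE" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096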
00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 81145 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81145 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81145' 00:14:29.246 killing process with pid 81145 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 81145 00:14:29.246 [2024-05-14 23:01:41.393296] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 81145 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:29.246 00:14:29.246 real 0m7.052s 00:14:29.246 user 0m27.773s 00:14:29.246 sys 0m2.027s 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:29.246 23:01:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:29.246 ************************************ 00:14:29.246 END TEST nvmf_fio_host 00:14:29.246 ************************************ 00:14:29.504 23:01:41 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:29.504 23:01:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:29.504 23:01:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:29.504 23:01:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.504 ************************************ 00:14:29.504 START TEST nvmf_failover 00:14:29.504 ************************************ 00:14:29.504 23:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:29.504 * Looking for test storage... 
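Before nvmf_failover begins, the nvmf_fio_host teardown above walks a fixed sequence. Condensed from the xtrace (pid 81145 is the nvmf_tgt started for that test, and rpc_cmd in the xtrace is effectively scripts/rpc.py against the default socket):

# nvmftestfini, roughly: drop the subsystem, unload the kernel modules,
# stop the target, flush the test interface
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp        # pulls out nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill 81145 && wait 81145       # nvmf_tgt for the finished test
ip -4 addr flush nvmf_init_if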
00:14:29.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:29.504 23:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:29.504 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:29.504 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.504 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.504 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.504 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.504 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.504 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.504 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.504 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.505 
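common.sh has now pinned the test constants: ports 4420/4421/4422, the 10.0.0.x addressing, a host NQN generated by nvme gen-hostnqn, and NVME_CONNECT='nvme connect'. This failover run drives I/O through bdevperf further down rather than the kernel initiator, so the sketch below is only an illustration of how those pieces would compose with standard nvme-cli flags, using this run's generated host identity and the cnode1 subsystem created later in the log:

NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de
NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de

# kernel-initiator connect to the test subsystem (not executed by this job)
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"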
23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:29.505 Cannot find device "nvmf_tgt_br" 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:29.505 Cannot find device "nvmf_tgt_br2" 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:29.505 Cannot find device "nvmf_tgt_br" 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:29.505 Cannot find device "nvmf_tgt_br2" 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:29.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:29.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:29.505 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:29.764 23:01:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:29.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:29.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:14:29.764 00:14:29.764 --- 10.0.0.2 ping statistics --- 00:14:29.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.764 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:29.764 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:29.764 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:14:29.764 00:14:29.764 --- 10.0.0.3 ping statistics --- 00:14:29.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.764 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:29.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:14:29.764 00:14:29.764 --- 10.0.0.1 ping statistics --- 00:14:29.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.764 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=81477 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 81477 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 81477 ']' 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:29.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
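All three pings succeed, so the veth/bridge fabric is in place and nvmf_tgt is being launched inside the target namespace. Condensed from the commands above, with the second target veth, the link-up steps and the FORWARD rule trimmed for brevity:

# host-side initiator veth (10.0.0.1) bridged to a target veth that lives
# inside the nvmf_tgt_ns_spdk namespace (10.0.0.2)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# the target then runs inside that namespace
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE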
00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:29.764 23:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:30.031 [2024-05-14 23:01:42.164970] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:30.031 [2024-05-14 23:01:42.165092] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.031 [2024-05-14 23:01:42.306596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:30.031 [2024-05-14 23:01:42.393637] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.031 [2024-05-14 23:01:42.393725] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.031 [2024-05-14 23:01:42.393748] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.031 [2024-05-14 23:01:42.393780] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.031 [2024-05-14 23:01:42.393795] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.031 [2024-05-14 23:01:42.393921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.031 [2024-05-14 23:01:42.395072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.031 [2024-05-14 23:01:42.395097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.967 23:01:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:30.967 23:01:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:14:30.967 23:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.967 23:01:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.967 23:01:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:30.967 23:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.967 23:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:31.233 [2024-05-14 23:01:43.372299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.233 23:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:31.492 Malloc0 00:14:31.492 23:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:31.750 23:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:32.008 23:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.008 [2024-05-14 23:01:44.397011] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:32.008 [2024-05-14 
23:01:44.397315] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.265 23:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:32.265 [2024-05-14 23:01:44.641423] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:32.523 23:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:32.523 [2024-05-14 23:01:44.897648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:32.781 23:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81593 00:14:32.781 23:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:32.781 23:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:32.781 23:01:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81593 /var/tmp/bdevperf.sock 00:14:32.781 23:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 81593 ']' 00:14:32.781 23:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.781 23:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:32.781 23:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:32.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
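The cnode1 subsystem now listens on 4420, 4421 and 4422, and bdevperf has been started as an RPC server on its own socket. Condensed from the commands around this point (RPC and BPERF_SOCK are shorthands added here), the failover exercise attaches the same NVMe0 controller over two of the listeners, starts I/O out-of-band, then pulls the active listener so that I/O should continue over the surviving path:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock

# bdevperf: -z waits for RPC, so I/O only starts once perform_tests is sent
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r "$BPERF_SOCK" -q 128 -o 4096 -w verify -t 15 -f &

$RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s "$BPERF_SOCK" perform_tests &

# drop the active listener on the target; the NVMe0 bdev should fail over to 4421
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420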
00:14:32.781 23:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:32.781 23:01:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:33.037 23:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:33.037 23:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:14:33.037 23:01:45 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:33.602 NVMe0n1 00:14:33.602 23:01:45 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:33.861 00:14:33.861 23:01:46 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81627 00:14:33.861 23:01:46 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:33.861 23:01:46 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:34.803 23:01:47 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.069 [2024-05-14 23:01:47.350803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.350882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.350900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.350914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.350927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.350941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.350954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.350967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.350980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.350994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.351008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.351023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.070 [2024-05-14 23:01:47.351037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same 
with the state(5) to be set 00:14:35.071 [2024-05-14 23:01:47.352014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.071 [2024-05-14 23:01:47.352023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.071 [2024-05-14 23:01:47.352031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.071 [2024-05-14 23:01:47.352039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.071 [2024-05-14 23:01:47.352048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bc310 is same with the state(5) to be set 00:14:35.071 23:01:47 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:38.358 23:01:50 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:38.358 00:14:38.358 23:01:50 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:38.617 [2024-05-14 23:01:50.983736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 
23:01:50.983902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983944] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983960] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.983993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same 
with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984091] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984193] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.617 [2024-05-14 23:01:50.984234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984259] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984283] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984341] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984390] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the 
state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984440] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.618 [2024-05-14 23:01:50.984644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bceb0 is same with the state(5) to be set 00:14:38.876 23:01:51 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:14:42.156 23:01:54 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.156 [2024-05-14 23:01:54.238714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.156 23:01:54 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:14:43.089 23:01:55 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:43.346 [2024-05-14 23:01:55.644098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644185] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the state(5) to be set 00:14:43.346 [2024-05-14 23:01:55.644260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2414840 is same with the 
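For readability, the failover steps that host/failover.sh drives through rpc.py in the trace above can be summarized with the sketch below. Addresses, ports, the subsystem NQN and the option strings are copied from the trace; "rpc.py" stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the comments are an interpretation of the trace, not part of the captured output.

  # Listener migration exercised above; calls without -s go to the default RPC
  # socket (the nvmf target here), the -s call goes to the bdevperf initiator.
  sleep 3                                                                                        # failover.sh@45
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1                                  # failover.sh@47: add a second path on port 4422
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421    # failover.sh@48: drop the port-4421 listener
  sleep 3                                                                                        # failover.sh@50: give bdevperf time to fail over
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420       # failover.sh@53: bring the port-4420 listener back
  sleep 1                                                                                        # failover.sh@55
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422    # failover.sh@57: drop the port-4422 listener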
00:14:43.346 23:01:55 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 81627
00:14:49.912 0
00:14:49.912 23:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 81593
00:14:49.912 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 81593 ']'
00:14:49.912 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 81593
00:14:49.912 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:14:49.912 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:14:49.912 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81593
00:14:49.912 killing process with pid 81593
23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:14:49.912 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:14:49.912 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81593'
00:14:49.912 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 81593
00:14:49.912 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 81593
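The xtrace above (common/autotest_common.sh@946 through @970) shows the shape of the killprocess helper used to stop the bdevperf process. The sketch below is reconstructed from those trace lines only; it is an approximation for readability, not the verbatim autotest_common.sh source, and the sudo branch at @956 is left unexpanded because it is not taken in this run.

  killprocess() {
      # Reconstructed from the xtrace lines above; approximate, not the verbatim helper.
      local pid=$1
      [ -z "$pid" ] && return 1                           # @946: a pid argument is required
      kill -0 "$pid"                                      # @950: confirm the process is still alive
      if [ "$(uname)" = Linux ]; then                     # @951
          process_name=$(ps --no-headers -o comm= "$pid") # @952: reactor_0 in this run
      fi
      # @956 compares $process_name with sudo; that branch is not exercised here.
      echo "killing process with pid $pid"                # @964
      kill "$pid"                                         # @965
      wait "$pid"                                         # @970
  }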
00:14:49.912 23:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:14:49.912 [2024-05-14 23:01:44.970616] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization...
00:14:49.912 [2024-05-14 23:01:44.970721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81593 ]
00:14:49.912 [2024-05-14 23:01:45.106814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:49.912 [2024-05-14 23:01:45.176881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:49.912 Running I/O for 15 seconds...
00:14:49.912 [2024-05-14 23:01:47.352708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:49.912 [2024-05-14 23:01:47.352800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ print_command / ABORTED - SQ DELETION print_completion pairs (differing only in cid, lba and timestamp) follow for lba 72408 through lba 73024; duplicate entries elided ...]
00:14:49.915 [2024-05-14 23:01:47.355557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:14:49.915 [2024-05-14 23:01:47.355571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous WRITE print_command / ABORTED - SQ DELETION print_completion pairs (differing only in cid, lba and timestamp) follow for lba 73040 through lba 73336; duplicate entries elided ...]
00:14:49.916 [2024-05-14 23:01:47.356823] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.916 [2024-05-14 23:01:47.356838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.356854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.916 [2024-05-14 23:01:47.356867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.356883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.916 [2024-05-14 23:01:47.356896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.356911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.916 [2024-05-14 23:01:47.356925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.356940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.916 [2024-05-14 23:01:47.356953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.356969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.916 [2024-05-14 23:01:47.356982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.356998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.916 [2024-05-14 23:01:47.357031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.357048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.916 [2024-05-14 23:01:47.357061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.357077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.916 [2024-05-14 23:01:47.357091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.357106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1830 is same with the state(5) to be set 00:14:49.916 [2024-05-14 23:01:47.357125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.916 [2024-05-14 23:01:47.357136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.916 [2024-05-14 
23:01:47.357149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73416 len:8 PRP1 0x0 PRP2 0x0 00:14:49.916 [2024-05-14 23:01:47.357163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.357218] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14c1830 was disconnected and freed. reset controller. 00:14:49.916 [2024-05-14 23:01:47.357237] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:49.916 [2024-05-14 23:01:47.357305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.916 [2024-05-14 23:01:47.357340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.357369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.916 [2024-05-14 23:01:47.357391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.357414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.916 [2024-05-14 23:01:47.357435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.357457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.916 [2024-05-14 23:01:47.357479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:47.357502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:49.916 [2024-05-14 23:01:47.361604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:49.916 [2024-05-14 23:01:47.361661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14525f0 (9): Bad file descriptor 00:14:49.916 [2024-05-14 23:01:47.418894] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:49.916 [2024-05-14 23:01:50.983165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.916 [2024-05-14 23:01:50.983246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:50.983269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.916 [2024-05-14 23:01:50.983313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:50.983330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.916 [2024-05-14 23:01:50.983343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:50.983357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.916 [2024-05-14 23:01:50.983371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:50.983384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14525f0 is same with the state(5) to be set 00:14:49.916 [2024-05-14 23:01:50.985313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.916 [2024-05-14 23:01:50.985348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:50.985374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.916 [2024-05-14 23:01:50.985390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:50.985406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.916 [2024-05-14 23:01:50.985420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:50.985436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.916 [2024-05-14 23:01:50.985450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:50.985466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.916 [2024-05-14 23:01:50.985480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.916 [2024-05-14 23:01:50.985495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.985508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.985537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.985566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.985595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.985639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.985670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.985700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.985729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.985772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.985806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.985836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.917 [2024-05-14 23:01:50.985868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.917 [2024-05-14 23:01:50.985898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.917 [2024-05-14 23:01:50.985928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.917 [2024-05-14 23:01:50.985957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.985973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.917 [2024-05-14 23:01:50.985986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.917 [2024-05-14 23:01:50.986015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.917 [2024-05-14 23:01:50.986053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.986084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.986113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.986142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.986171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.986200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.986230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.986259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.986288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.917 [2024-05-14 23:01:50.986317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.917 [2024-05-14 23:01:50.986333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 
[2024-05-14 23:01:50.986460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.918 [2024-05-14 23:01:50.986736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.986779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.986818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.986849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.986879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.986908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.986937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.986966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.986981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.986995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:69 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.918 [2024-05-14 23:01:50.987345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.918 [2024-05-14 23:01:50.987359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56016 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 
23:01:50.987690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.987979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.987996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.919 [2024-05-14 23:01:50.988525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.919 [2024-05-14 23:01:50.988555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.920 [2024-05-14 23:01:50.988570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.988586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.920 [2024-05-14 23:01:50.988600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.988615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.920 [2024-05-14 23:01:50.988629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.988644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.920 [2024-05-14 23:01:50.988658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.988674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.920 [2024-05-14 23:01:50.988687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.988703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.920 [2024-05-14 23:01:50.988717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.988732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.920 [2024-05-14 23:01:50.988753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.988804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.988822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56376 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.988835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.988857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.988868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.988879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56384 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.988892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.988906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.988916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.988927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56392 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.988940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.988953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.988963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.988974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56400 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.988987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56408 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56416 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56424 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55720 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55728 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55736 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55744 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55752 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55760 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55768 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55776 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.920 [2024-05-14 23:01:50.989547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.920 [2024-05-14 23:01:50.989564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55784 len:8 PRP1 0x0 PRP2 0x0 00:14:49.920 [2024-05-14 23:01:50.989578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.920 [2024-05-14 23:01:50.989592] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:14:49.920 [2024-05-14 23:01:50.989602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:14:49.920 [2024-05-14 23:01:50.989612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55792 len:8 PRP1 0x0 PRP2 0x0
00:14:49.920 [2024-05-14 23:01:50.989625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:49.920 [2024-05-14 23:01:50.989679] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x166c300 was disconnected and freed. reset controller.
00:14:49.920 [2024-05-14 23:01:50.989698] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:14:49.920 [2024-05-14 23:01:50.989712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:14:49.921 [2024-05-14 23:01:50.993787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:14:49.921 [2024-05-14 23:01:50.993859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14525f0 (9): Bad file descriptor
00:14:49.921 [2024-05-14 23:01:51.026945] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:14:49.921 [2024-05-14 23:01:55.644482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:14:49.921 [2024-05-14 23:01:55.644551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:49.921 [2024-05-14 23:01:55.644583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:14:49.921 [2024-05-14 23:01:55.644599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:49.921 [2024-05-14 23:01:55.644616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:14:49.921 [2024-05-14 23:01:55.644630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:49.921 [2024-05-14 23:01:55.644646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:14:49.921 [2024-05-14 23:01:55.644660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:49.921 [2024-05-14 23:01:55.644675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:14:49.921 [2024-05-14 23:01:55.644689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:49.921 [2024-05-14 23:01:55.644704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:14:49.921 [2024-05-14 23:01:55.644717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:49.921 [2024-05-14
23:01:55.644733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.644746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.644780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.644835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.644854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.644869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.644884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.644898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.644913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.644926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.644942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.644956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.644971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.644985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:122112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:122144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:90 nsid:1 lba:122208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:122232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:122240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.921 [2024-05-14 23:01:55.645521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.921 [2024-05-14 23:01:55.645536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:122264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122288 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:122296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:122312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:122320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.645962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.645978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.646000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:14:49.922 [2024-05-14 23:01:55.646030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.646060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.646090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.646118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.646147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.922 [2024-05-14 23:01:55.646176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.922 [2024-05-14 23:01:55.646205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.922 [2024-05-14 23:01:55.646235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.922 [2024-05-14 23:01:55.646264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.922 [2024-05-14 23:01:55.646293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.922 [2024-05-14 
23:01:55.646322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.922 [2024-05-14 23:01:55.646337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.922 [2024-05-14 23:01:55.646351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.923 [2024-05-14 23:01:55.646388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:122432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.646974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.646987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.923 [2024-05-14 23:01:55.647381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.923 [2024-05-14 23:01:55.647396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.924 [2024-05-14 23:01:55.647409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.924 [2024-05-14 23:01:55.647438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.924 [2024-05-14 23:01:55.647467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.924 [2024-05-14 23:01:55.647496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.924 [2024-05-14 23:01:55.647525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.924 [2024-05-14 23:01:55.647554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.924 [2024-05-14 23:01:55.647591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:49.924 [2024-05-14 23:01:55.647606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.924 [2024-05-14 23:01:55.647620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:49.924 [2024-05-14 23:01:55.647649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.924 [2024-05-14 23:01:55.647707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122736 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.647720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.924 [2024-05-14 23:01:55.647866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.924 [2024-05-14 23:01:55.647895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.924 [2024-05-14 23:01:55.647923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.924 [2024-05-14 23:01:55.647950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.647964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14525f0 is same with the state(5) to be set 00:14:49.924 [2024-05-14 23:01:55.648218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.924 [2024-05-14 23:01:55.648237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.924 [2024-05-14 23:01:55.648248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122744 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.648262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.648280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.924 [2024-05-14 23:01:55.648291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:14:49.924 [2024-05-14 23:01:55.648302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121784 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.648315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.648329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.924 [2024-05-14 23:01:55.648351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.924 [2024-05-14 23:01:55.648364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121792 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.648377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.648391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.924 [2024-05-14 23:01:55.648401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.924 [2024-05-14 23:01:55.648412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121800 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.648424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.648438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.924 [2024-05-14 23:01:55.648448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.924 [2024-05-14 23:01:55.648458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121808 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.648471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.648484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.924 [2024-05-14 23:01:55.648494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.924 [2024-05-14 23:01:55.648505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121816 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.648521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.648551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.924 [2024-05-14 23:01:55.648564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.924 [2024-05-14 23:01:55.648574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121824 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.648587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.648601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.924 [2024-05-14 23:01:55.648611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.924 [2024-05-14 23:01:55.648622] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121832 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.648635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.648648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.924 [2024-05-14 23:01:55.648658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.924 [2024-05-14 23:01:55.648668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121840 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.648681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.648695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.924 [2024-05-14 23:01:55.648705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.924 [2024-05-14 23:01:55.648715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121848 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.648728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.924 [2024-05-14 23:01:55.648750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.924 [2024-05-14 23:01:55.648779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.924 [2024-05-14 23:01:55.648803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121856 len:8 PRP1 0x0 PRP2 0x0 00:14:49.924 [2024-05-14 23:01:55.648820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.648835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.648845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.648855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121864 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.648868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.648882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.648891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.648902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121872 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.648915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.648928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.648938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.648949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:121880 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.648964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.648979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.648989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.648999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121888 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.649012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.649026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.649036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.649046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121896 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.649059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.649073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.649082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.649092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121904 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.649106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.649119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.649129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.649140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121912 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.649162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.649177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.649187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.649198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121920 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.649211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.649225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.649235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.649245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121928 len:8 PRP1 0x0 PRP2 
0x0 00:14:49.925 [2024-05-14 23:01:55.649258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.649272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.649281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.649291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121936 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.649304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.649318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.649328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.649338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121944 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.649353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.649367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.649377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.649387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121952 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.649400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.649414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.649424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.649434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121960 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.649447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.649461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.649470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.649480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121968 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.663238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.663316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.663334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.663378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121976 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.663399] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.663419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.663445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.663457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121984 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.663470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.663484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.663494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.663505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121992 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.663518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.663532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.663541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.663552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122000 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.663565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.663579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.663589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.663600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122008 len:8 PRP1 0x0 PRP2 0x0 00:14:49.925 [2024-05-14 23:01:55.663614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.925 [2024-05-14 23:01:55.663628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.925 [2024-05-14 23:01:55.663638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.925 [2024-05-14 23:01:55.663648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122016 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.663661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.663675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.663685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.663695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122024 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.663708] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.663722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.663732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.663742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122032 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.663755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.663793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.663807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.663818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122040 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.663832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.663846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.663856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.663867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122048 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.663880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.663894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.663904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.663914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122056 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.663927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.663941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.663957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.663967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122064 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.663980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.663994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122072 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.664041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122080 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.664089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122088 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.664135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122096 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.664191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122104 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.664238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122112 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.664288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122120 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 
[2024-05-14 23:01:55.664348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122128 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.664395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122136 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.664443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122144 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.664507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122152 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.664581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122160 len:8 PRP1 0x0 PRP2 0x0 00:14:49.926 [2024-05-14 23:01:55.664625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.926 [2024-05-14 23:01:55.664639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.926 [2024-05-14 23:01:55.664649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.926 [2024-05-14 23:01:55.664660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122168 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.664673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.664688] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.664698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.664708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122176 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.664721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.664735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.664745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.664755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122184 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.664782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.664797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.664807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.664818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122192 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.664831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.664844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.664854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.664865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122200 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.664879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.664893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.664903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.664914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122208 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.664927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.664941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.664951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.664961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122216 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.664974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.664993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122224 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122232 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122240 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122248 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122256 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122264 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 
23:01:55.665313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122272 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122280 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122288 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122296 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122304 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122312 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665615] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122320 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122328 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122336 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122344 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122352 len:8 PRP1 0x0 PRP2 0x0 00:14:49.927 [2024-05-14 23:01:55.665860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.927 [2024-05-14 23:01:55.665874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.927 [2024-05-14 23:01:55.665886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.927 [2024-05-14 23:01:55.665897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122360 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.665910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.665924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.665934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.665945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122368 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.665958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.665971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.665981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.665991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122376 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122384 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122392 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122400 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122408 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 
[2024-05-14 23:01:55.666233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121728 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121736 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121744 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121752 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121760 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121768 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121776 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122416 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122424 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.928 [2024-05-14 23:01:55.666679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122432 len:8 PRP1 0x0 PRP2 0x0 00:14:49.928 [2024-05-14 23:01:55.666692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.928 [2024-05-14 23:01:55.666705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.928 [2024-05-14 23:01:55.666715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.666726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122440 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.666738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.666752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.666776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.666798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122448 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.666815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.666829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.683872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.683963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:122456 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.684024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.684077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.684105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.684134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122464 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.684171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.684208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.684236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.684265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122472 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.684342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.684380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.684407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.684435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122480 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.684473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.684512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.684588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.684622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122488 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.684659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.684699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.684730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.684761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122496 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.684830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.684870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.684897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.684925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122504 len:8 PRP1 0x0 PRP2 
0x0 00:14:49.929 [2024-05-14 23:01:55.684961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.684997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.685025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.685054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122512 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.685090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.685126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.685152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.685179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122520 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.685216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.685253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.685280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.685308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122528 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.685343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.685380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.685407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.685459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122536 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.685498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.685535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.685562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.685591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122544 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.685627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.685661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.685686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.685712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122552 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.685745] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.685807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.685835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.685863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122560 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.685899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.685936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.685963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.685991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122568 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.686027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.686063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.686090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.686118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122576 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.686153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.686190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.686217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.686244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122584 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.686282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.686320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.929 [2024-05-14 23:01:55.686345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.929 [2024-05-14 23:01:55.686374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122592 len:8 PRP1 0x0 PRP2 0x0 00:14:49.929 [2024-05-14 23:01:55.686409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.929 [2024-05-14 23:01:55.686465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.686492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.686519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122600 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.686551] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.686586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.686613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.686641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122608 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.686677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.686713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.686738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.686817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122616 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.686863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.686905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.686937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.686965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122624 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.687002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.687039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.687065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.687094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122632 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.687132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.687174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.687203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.687230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122640 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.687267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.687304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.687330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.687358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122648 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.687394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.687431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.687458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.687485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122656 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.687541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.687578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.687604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.687633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122664 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.687671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.687709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.687736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.687790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122672 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.687834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.687876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.687907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.687936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122680 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.687972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.688009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.688037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.688066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122688 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.688101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.688138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.688165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.688192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122696 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.688229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 
[2024-05-14 23:01:55.688266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.688293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.688321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122704 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.688357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.688393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.688420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.688448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122712 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.688483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.688519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.688569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.688622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122720 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.688661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.688699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.688725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.688752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122728 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.688813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.688849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:49.930 [2024-05-14 23:01:55.688874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:49.930 [2024-05-14 23:01:55.688900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122736 len:8 PRP1 0x0 PRP2 0x0 00:14:49.930 [2024-05-14 23:01:55.688934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.930 [2024-05-14 23:01:55.689044] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14c6b90 was disconnected and freed. reset controller. 00:14:49.930 [2024-05-14 23:01:55.689098] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:14:49.930 [2024-05-14 23:01:55.689134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:49.930 [2024-05-14 23:01:55.689268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14525f0 (9): Bad file descriptor
00:14:49.930 [2024-05-14 23:01:55.695406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:14:49.930 [2024-05-14 23:01:55.739880] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:14:49.930 
00:14:49.930 Latency(us)
00:14:49.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:49.930 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:49.930 Verification LBA range: start 0x0 length 0x4000
00:14:49.930 NVMe0n1 : 15.01 8363.82 32.67 239.66 0.00 14842.44 845.27 52428.80
00:14:49.930 ===================================================================================================================
00:14:49.930 Total : 8363.82 32.67 239.66 0.00 14842.44 845.27 52428.80
00:14:49.930 Received shutdown signal, test time was about 15.000000 seconds
00:14:49.930 
00:14:49.931 Latency(us)
00:14:49.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:49.931 ===================================================================================================================
00:14:49.931 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:14:49.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=81830
00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 81830 /var/tmp/bdevperf.sock
00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 81830 ']'
00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:49.931 23:02:01 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:14:50.494 23:02:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:50.494 23:02:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:14:50.494 23:02:02 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:50.751 [2024-05-14 23:02:02.933096] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:50.751 23:02:02 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:51.007 [2024-05-14 23:02:03.189388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:51.007 23:02:03 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:51.263 NVMe0n1 00:14:51.263 23:02:03 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:51.520 00:14:51.520 23:02:03 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:52.084 00:14:52.084 23:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:52.084 23:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:14:52.342 23:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:52.907 23:02:05 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:14:56.185 23:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:56.185 23:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:14:56.185 23:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:56.185 23:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=81974 00:14:56.185 23:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 81974 00:14:57.163 0 00:14:57.163 23:02:09 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:57.163 [2024-05-14 23:02:01.589083] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:14:57.163 [2024-05-14 23:02:01.589884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81830 ] 00:14:57.163 [2024-05-14 23:02:01.730340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.163 [2024-05-14 23:02:01.791834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.163 [2024-05-14 23:02:05.034728] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:57.163 [2024-05-14 23:02:05.034869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.163 [2024-05-14 23:02:05.034896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.163 [2024-05-14 23:02:05.034915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.163 [2024-05-14 23:02:05.034930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.163 [2024-05-14 23:02:05.034944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.163 [2024-05-14 23:02:05.034958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.163 [2024-05-14 23:02:05.034972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.163 [2024-05-14 23:02:05.034986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.163 [2024-05-14 23:02:05.035001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:57.163 [2024-05-14 23:02:05.035053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:57.163 [2024-05-14 23:02:05.035085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134a5f0 (9): Bad file descriptor 00:14:57.163 [2024-05-14 23:02:05.046258] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:57.163 Running I/O for 1 seconds... 
00:14:57.163 00:14:57.163 Latency(us) 00:14:57.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.163 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:57.163 Verification LBA range: start 0x0 length 0x4000 00:14:57.163 NVMe0n1 : 1.01 8727.84 34.09 0.00 0.00 14576.47 2174.60 16086.11 00:14:57.163 =================================================================================================================== 00:14:57.163 Total : 8727.84 34.09 0.00 0.00 14576.47 2174.60 16086.11 00:14:57.163 23:02:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:57.163 23:02:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:57.422 23:02:09 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:57.989 23:02:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:57.989 23:02:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:58.247 23:02:10 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:58.506 23:02:10 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:01.795 23:02:13 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:01.795 23:02:13 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:01.795 23:02:14 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 81830 00:15:01.795 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 81830 ']' 00:15:01.795 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 81830 00:15:01.795 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:15:01.795 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:01.795 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81830 00:15:01.795 killing process with pid 81830 00:15:01.795 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:01.795 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:01.795 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81830' 00:15:01.795 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 81830 00:15:01.795 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 81830 00:15:02.068 23:02:14 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:02.068 23:02:14 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:02.327 23:02:14 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:02.327 rmmod nvme_tcp 00:15:02.327 rmmod nvme_fabrics 00:15:02.327 rmmod nvme_keyring 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 81477 ']' 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 81477 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 81477 ']' 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 81477 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81477 00:15:02.327 killing process with pid 81477 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81477' 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 81477 00:15:02.327 [2024-05-14 23:02:14.572453] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:02.327 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 81477 00:15:02.585 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:02.585 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:02.585 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:02.585 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.585 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.585 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.585 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.585 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.585 23:02:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:02.585 00:15:02.585 real 0m33.129s 00:15:02.585 user 2m10.014s 00:15:02.585 sys 0m4.586s 00:15:02.585 23:02:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:02.585 23:02:14 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:02.585 ************************************ 00:15:02.585 END TEST nvmf_failover 00:15:02.585 ************************************ 00:15:02.585 23:02:14 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:02.585 23:02:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:02.585 23:02:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:02.585 23:02:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:02.585 ************************************ 00:15:02.585 START TEST nvmf_host_discovery 00:15:02.585 ************************************ 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:02.585 * Looking for test storage... 00:15:02.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.585 23:02:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:02.586 23:02:14 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:02.586 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:02.844 Cannot find device 
"nvmf_tgt_br" 00:15:02.844 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:02.844 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:02.844 Cannot find device "nvmf_tgt_br2" 00:15:02.844 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:02.844 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:02.844 23:02:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:02.844 Cannot find device "nvmf_tgt_br" 00:15:02.844 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:02.844 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:02.844 Cannot find device "nvmf_tgt_br2" 00:15:02.844 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:02.844 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:02.844 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:02.844 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:02.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.844 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:02.845 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:02.845 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:03.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:15:03.103 00:15:03.103 --- 10.0.0.2 ping statistics --- 00:15:03.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.103 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:03.103 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:03.103 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:03.103 00:15:03.103 --- 10.0.0.3 ping statistics --- 00:15:03.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.103 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:03.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:03.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:03.103 00:15:03.103 --- 10.0.0.1 ping statistics --- 00:15:03.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.103 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=82284 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 82284 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 82284 ']' 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:03.103 23:02:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.103 [2024-05-14 23:02:15.408600] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:03.103 [2024-05-14 23:02:15.408729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.362 [2024-05-14 23:02:15.549278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.362 [2024-05-14 23:02:15.608926] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.362 [2024-05-14 23:02:15.608979] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:03.362 [2024-05-14 23:02:15.608992] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.362 [2024-05-14 23:02:15.609000] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.362 [2024-05-14 23:02:15.609007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.362 [2024-05-14 23:02:15.609038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.296 [2024-05-14 23:02:16.533499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.296 [2024-05-14 23:02:16.541417] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:04.296 [2024-05-14 23:02:16.541687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.296 null0 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.296 null1 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:04.296 
23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=82334 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 82334 /tmp/host.sock 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 82334 ']' 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:04.296 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:04.296 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.296 [2024-05-14 23:02:16.625804] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:04.296 [2024-05-14 23:02:16.625886] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82334 ] 00:15:04.555 [2024-05-14 23:02:16.762893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.555 [2024-05-14 23:02:16.824576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- 
# get_subsystem_names 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:04.555 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.814 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:04.814 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:04.814 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:04.814 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:04.814 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.814 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.814 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:04.814 23:02:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:04.814 23:02:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:04.814 
23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.814 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.073 [2024-05-14 23:02:17.277806] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:05.073 23:02:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:05.331 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.331 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:15:05.331 23:02:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:15:05.590 [2024-05-14 23:02:17.926841] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:05.590 [2024-05-14 23:02:17.926904] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:05.590 [2024-05-14 23:02:17.926926] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:05.848 [2024-05-14 23:02:18.012981] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:05.849 [2024-05-14 23:02:18.068954] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:05.849 [2024-05-14 23:02:18.069021] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.454 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:06.455 
23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.455 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.714 [2024-05-14 23:02:18.914503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:06.714 [2024-05-14 23:02:18.915626] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:06.714 [2024-05-14 23:02:18.915670] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 
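[editor's note] The step above adds a second TCP listener for the same subsystem on port 4421; notice that no host-side RPC follows in the trace — the already-running discovery service receives an AER, re-reads the discovery log page, and attaches the new path on its own. A minimal sketch of that target-side step, using only the RPCs and helpers visible in this run (rpc_cmd and the port variables come from the test harness):

  # Expose nqn.2016-06.io.spdk:cnode0 on a second TCP port; the host's
  # discovery poller (started earlier with bdev_nvme_start_discovery)
  # picks the new path up without any further host-side call.
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421

  # The test then only has to wait until both ports show up as paths:
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'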
00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.714 23:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:06.714 [2024-05-14 23:02:19.001002] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:06.714 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.714 [2024-05-14 23:02:19.059348] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:06.714 [2024-05-14 23:02:19.059389] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:06.714 [2024-05-14 23:02:19.059398] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:06.714 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:06.714 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.714 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:06.714 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:06.714 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.714 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.714 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:06.715 23:02:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:15:06.715 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:06.715 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.715 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.715 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:06.715 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:06.715 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:06.715 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.975 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.975 [2024-05-14 23:02:19.195189] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:06.975 [2024-05-14 23:02:19.195238] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:06.975 [2024-05-14 23:02:19.196436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.975 [2024-05-14 23:02:19.196485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.976 [2024-05-14 23:02:19.196502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.976 [2024-05-14 23:02:19.196513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.976 [2024-05-14 23:02:19.196523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.976 [2024-05-14 23:02:19.196532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.976 [2024-05-14 23:02:19.196543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.976 [2024-05-14 23:02:19.196552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.976 [2024-05-14 23:02:19.196562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9592d0 is same with the state(5) to be set 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.976 23:02:19 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:06.976 [2024-05-14 23:02:19.206383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9592d0 (9): Bad file descriptor 00:15:06.976 [2024-05-14 23:02:19.216413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:06.976 [2024-05-14 23:02:19.216610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.216671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.216689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9592d0 with addr=10.0.0.2, port=4420 00:15:06.976 [2024-05-14 23:02:19.216702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9592d0 is same with the state(5) to be set 00:15:06.976 [2024-05-14 23:02:19.216724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9592d0 (9): Bad file descriptor 00:15:06.976 [2024-05-14 23:02:19.216741] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:06.976 [2024-05-14 23:02:19.216752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:06.976 [2024-05-14 23:02:19.216776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:06.976 [2024-05-14 23:02:19.216797] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
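[editor's note] Every check in this trace goes through the same bounded-retry helper from common/autotest_common.sh: the condition is passed as a string and re-evaluated until it holds or the retry budget runs out. A simplified reconstruction based only on the xtrace above (the sleep between attempts and the failure message are assumptions; the real helper may differ):

  # Re-evaluate a shell condition up to 10 times; succeed as soon as it holds.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0            # condition met
          fi
          sleep 0.5               # back-off between polls (assumed)
      done
      echo "Timed out waiting for: $cond" >&2
      return 1
  }

  # Typical uses seen in this run:
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'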
00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.976 [2024-05-14 23:02:19.226510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:06.976 [2024-05-14 23:02:19.226669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.226723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.226740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9592d0 with addr=10.0.0.2, port=4420 00:15:06.976 [2024-05-14 23:02:19.226752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9592d0 is same with the state(5) to be set 00:15:06.976 [2024-05-14 23:02:19.226788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9592d0 (9): Bad file descriptor 00:15:06.976 [2024-05-14 23:02:19.226808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:06.976 [2024-05-14 23:02:19.226819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:06.976 [2024-05-14 23:02:19.226830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:06.976 [2024-05-14 23:02:19.226847] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:06.976 [2024-05-14 23:02:19.236627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:06.976 [2024-05-14 23:02:19.236835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.236894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.236912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9592d0 with addr=10.0.0.2, port=4420 00:15:06.976 [2024-05-14 23:02:19.236924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9592d0 is same with the state(5) to be set 00:15:06.976 [2024-05-14 23:02:19.236946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9592d0 (9): Bad file descriptor 00:15:06.976 [2024-05-14 23:02:19.236964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:06.976 [2024-05-14 23:02:19.236974] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:06.976 [2024-05-14 23:02:19.236985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:06.976 [2024-05-14 23:02:19.237002] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
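[editor's note] The two query helpers exercised throughout the reconnect storm above simply dump controller and bdev names from the host-side SPDK app and normalize them into one sorted line for string comparison. A hedged reconstruction from the host/discovery.sh@55/@59 trace lines (rpc_cmd and the /tmp/host.sock socket are provided by the harness):

  # Names of the NVMe controllers attached by the host-side bdev_nvme module.
  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
          | jq -r '.[].name' | sort | xargs
  }

  # Names of the block devices those controllers expose (one per namespace).
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }

  # With one controller and two namespaces attached, as in this test:
  #   get_subsystem_names  ->  "nvme0"
  #   get_bdev_list        ->  "nvme0n1 nvme0n2"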
00:15:06.976 [2024-05-14 23:02:19.246744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:06.976 [2024-05-14 23:02:19.246929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.246985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.247002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9592d0 with addr=10.0.0.2, port=4420 00:15:06.976 [2024-05-14 23:02:19.247015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9592d0 is same with the state(5) to be set 00:15:06.976 [2024-05-14 23:02:19.247036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9592d0 (9): Bad file descriptor 00:15:06.976 [2024-05-14 23:02:19.247053] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:06.976 [2024-05-14 23:02:19.247063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:06.976 [2024-05-14 23:02:19.247075] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:06.976 [2024-05-14 23:02:19.247092] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:06.976 [2024-05-14 23:02:19.256857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:06.976 [2024-05-14 23:02:19.257016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.257068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.257085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9592d0 with addr=10.0.0.2, port=4420 00:15:06.976 [2024-05-14 23:02:19.257098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9592d0 is same with the state(5) to be set 00:15:06.976 [2024-05-14 23:02:19.257118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9592d0 (9): Bad file descriptor 00:15:06.976 [2024-05-14 23:02:19.257135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:06.976 [2024-05-14 23:02:19.257145] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:06.976 [2024-05-14 23:02:19.257155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:06.976 [2024-05-14 23:02:19.257171] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
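[editor's note] The connect() failures above are errno 111 (ECONNREFUSED): the 4420 listener was just removed on the target, but bdev_nvme still holds that path and keeps trying to reset it until the next discovery log page prunes it. The test does not intervene; it only waits for the path list to shrink to the surviving port, using the helper sketched here from the host/discovery.sh@63 trace lines:

  # Service IDs (TCP ports) of every active path of a given controller,
  # numerically sorted onto one line.
  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  # After the 4420 listener is removed, discovery eventually drops the dead path:
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'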
00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.976 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:06.976 [2024-05-14 23:02:19.266949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:06.976 [2024-05-14 23:02:19.267072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.267123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.976 [2024-05-14 23:02:19.267140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9592d0 with addr=10.0.0.2, port=4420 00:15:06.976 [2024-05-14 23:02:19.267152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9592d0 is same with the state(5) to be set 00:15:06.976 [2024-05-14 23:02:19.267171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9592d0 (9): Bad file descriptor 00:15:06.976 [2024-05-14 23:02:19.267188] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:06.977 [2024-05-14 23:02:19.267197] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:06.977 [2024-05-14 23:02:19.267207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:06.977 [2024-05-14 23:02:19.267223] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
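[editor's note] Namespace add/remove events are tracked through the notification RPC: the test reads everything newer than the last consumed notification id, compares the count with what it expects, and advances the high-water mark. A sketch reconstructed from the host/discovery.sh@74/@75/@79/@80 trace lines (the exact bookkeeping in the real script may differ):

  # Count notifications newer than the last one consumed and remember where we stopped.
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  # Poll until exactly the expected number of new notifications has arrived.
  is_notification_count_eq() {
      local expected_count=$1
      waitforcondition 'get_notification_count && ((notification_count == expected_count))'
  }

  # As the trace shows: adding one namespace raises exactly one bdev notification,
  # while adding or removing listeners raises none.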
00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:06.977 [2024-05-14 23:02:19.277022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:06.977 [2024-05-14 23:02:19.277155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.977 [2024-05-14 23:02:19.277209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:06.977 [2024-05-14 23:02:19.277227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9592d0 with addr=10.0.0.2, port=4420 00:15:06.977 [2024-05-14 23:02:19.277240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9592d0 is same with the state(5) to be set 00:15:06.977 [2024-05-14 23:02:19.277259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9592d0 (9): Bad file descriptor 00:15:06.977 [2024-05-14 23:02:19.277287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:06.977 [2024-05-14 23:02:19.277298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:06.977 [2024-05-14 23:02:19.277309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:06.977 [2024-05-14 23:02:19.277326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
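[editor's note] Further down, the same discovery is started a second time and the RPC is expected to fail with "File exists" (Code=-17); the harness wraps such calls in a NOT helper that inverts the exit status instead of letting the failure abort the test. A simplified sketch of that idea, based on the common/autotest_common.sh@648/@651/@675 lines visible below (the real helper also special-cases high exit codes, omitted here):

  # Run a command that is supposed to fail; succeed only if it really failed.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))     # invert: a non-zero exit from the wrapped command is success
  }

  # Expected-failure example from this test: the discovery service at 10.0.0.2:8009
  # is already attached, so starting it again is rejected with "File exists".
  NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w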
00:15:06.977 [2024-05-14 23:02:19.281297] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:06.977 [2024-05-14 23:02:19.281330] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:06.977 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:07.236 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:07.237 
23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:07.237 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.496 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:07.496 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:07.496 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:07.496 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:07.496 23:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:07.496 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.496 23:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.430 [2024-05-14 23:02:20.657529] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:08.430 [2024-05-14 23:02:20.657575] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:08.430 [2024-05-14 23:02:20.657597] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:08.430 [2024-05-14 23:02:20.743671] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:08.430 [2024-05-14 23:02:20.803315] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:08.430 [2024-05-14 23:02:20.803389] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.431 2024/05/14 23:02:20 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 
trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:15:08.431 request: 00:15:08.431 { 00:15:08.431 "method": "bdev_nvme_start_discovery", 00:15:08.431 "params": { 00:15:08.431 "name": "nvme", 00:15:08.431 "trtype": "tcp", 00:15:08.431 "traddr": "10.0.0.2", 00:15:08.431 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:08.431 "adrfam": "ipv4", 00:15:08.431 "trsvcid": "8009", 00:15:08.431 "wait_for_attach": true 00:15:08.431 } 00:15:08.431 } 00:15:08.431 Got JSON-RPC error response 00:15:08.431 GoRPCClient: error on JSON-RPC call 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:08.431 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.689 23:02:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.689 2024/05/14 23:02:20 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:15:08.689 request: 00:15:08.689 { 00:15:08.689 "method": "bdev_nvme_start_discovery", 00:15:08.689 "params": { 00:15:08.689 "name": "nvme_second", 00:15:08.689 "trtype": "tcp", 00:15:08.689 "traddr": "10.0.0.2", 00:15:08.689 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:08.689 "adrfam": "ipv4", 00:15:08.689 "trsvcid": "8009", 00:15:08.689 "wait_for_attach": true 00:15:08.689 } 00:15:08.689 } 00:15:08.689 Got JSON-RPC error response 00:15:08.689 GoRPCClient: error on JSON-RPC call 00:15:08.689 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:08.690 23:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.690 23:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:08.690 23:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:08.690 23:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:08.690 23:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:08.690 23:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:08.690 23:02:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.690 23:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.690 23:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:08.690 23:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.949 23:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:08.949 23:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:08.949 23:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:08.949 23:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:08.949 23:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:08.949 23:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.949 23:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:08.949 23:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.949 23:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:08.949 23:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.949 23:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.885 [2024-05-14 23:02:22.109039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:09.885 [2024-05-14 23:02:22.109148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:09.885 [2024-05-14 23:02:22.109171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x970e30 with addr=10.0.0.2, port=8010 00:15:09.885 [2024-05-14 23:02:22.109191] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:09.885 [2024-05-14 23:02:22.109202] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:09.885 [2024-05-14 23:02:22.109211] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:10.820 [2024-05-14 23:02:23.109047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:10.820 [2024-05-14 23:02:23.109167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:10.820 [2024-05-14 23:02:23.109188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x970e30 with addr=10.0.0.2, port=8010 00:15:10.820 [2024-05-14 23:02:23.109208] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:10.820 [2024-05-14 23:02:23.109218] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:10.820 [2024-05-14 23:02:23.109228] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:11.753 [2024-05-14 23:02:24.108886] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] 
timed out while attaching discovery ctrlr 00:15:11.753 2024/05/14 23:02:24 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:15:11.753 request: 00:15:11.753 { 00:15:11.753 "method": "bdev_nvme_start_discovery", 00:15:11.753 "params": { 00:15:11.753 "name": "nvme_second", 00:15:11.753 "trtype": "tcp", 00:15:11.753 "traddr": "10.0.0.2", 00:15:11.753 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:11.753 "adrfam": "ipv4", 00:15:11.753 "trsvcid": "8010", 00:15:11.753 "attach_timeout_ms": 3000 00:15:11.753 } 00:15:11.753 } 00:15:11.753 Got JSON-RPC error response 00:15:11.753 GoRPCClient: error on JSON-RPC call 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:11.753 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 82334 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.012 rmmod nvme_tcp 00:15:12.012 rmmod nvme_fabrics 00:15:12.012 rmmod nvme_keyring 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@489 -- # '[' -n 82284 ']' 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 82284 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 82284 ']' 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 82284 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82284 00:15:12.012 killing process with pid 82284 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82284' 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 82284 00:15:12.012 [2024-05-14 23:02:24.281753] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:12.012 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 82284 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:12.271 00:15:12.271 real 0m9.664s 00:15:12.271 user 0m18.930s 00:15:12.271 sys 0m1.463s 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:12.271 ************************************ 00:15:12.271 END TEST nvmf_host_discovery 00:15:12.271 ************************************ 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.271 23:02:24 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:12.271 23:02:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:12.271 23:02:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:12.271 23:02:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:12.271 ************************************ 00:15:12.271 START TEST nvmf_host_multipath_status 00:15:12.271 ************************************ 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:12.271 * Looking for test storage... 00:15:12.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.271 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.272 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:12.531 Cannot find device "nvmf_tgt_br" 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:15:12.531 Cannot find device "nvmf_tgt_br2" 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:12.531 Cannot find device "nvmf_tgt_br" 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:12.531 Cannot find device "nvmf_tgt_br2" 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.531 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.531 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.531 23:02:24 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.531 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:12.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:15:12.790 00:15:12.790 --- 10.0.0.2 ping statistics --- 00:15:12.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.790 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:12.790 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.790 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:12.790 00:15:12.790 --- 10.0.0.3 ping statistics --- 00:15:12.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.790 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:12.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:15:12.790 00:15:12.790 --- 10.0.0.1 ping statistics --- 00:15:12.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.790 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:12.790 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=82780 00:15:12.791 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:12.791 23:02:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 82780 00:15:12.791 23:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 82780 ']' 00:15:12.791 23:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.791 23:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:12.791 23:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.791 23:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:12.791 23:02:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:12.791 [2024-05-14 23:02:25.047634] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:12.791 [2024-05-14 23:02:25.047722] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.049 [2024-05-14 23:02:25.183930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:13.049 [2024-05-14 23:02:25.276379] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
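For readers reconstructing this environment outside the harness: the bring-up traced above (nvmf_veth_init from /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh, followed by nvmfappstart -m 0x3) reduces to roughly the following condensed sketch. Interface names, addresses, and the target binary path are taken from the log itself; the readiness loop at the end is a simplified stand-in for the suite's waitforlisten helper, and the cleanup, retries, and error handling of the real helpers are omitted.

#!/usr/bin/env bash
# Hedged sketch of the veth/netns topology the test suite builds, per the log above.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# One host-side initiator interface and two target-side interfaces, each a veth pair.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign the 10.0.0.0/24 addresses.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and bridge the peer ends together on the host side.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in and across the bridge, then sanity-check reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch the target inside the namespace and wait for its RPC socket
# (simplified stand-in for waitforlisten, which the suite uses here).
ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done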
00:15:13.049 [2024-05-14 23:02:25.276455] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.049 [2024-05-14 23:02:25.276476] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.049 [2024-05-14 23:02:25.276492] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.049 [2024-05-14 23:02:25.276505] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.049 [2024-05-14 23:02:25.276655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.049 [2024-05-14 23:02:25.276936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.983 23:02:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:13.983 23:02:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:15:13.983 23:02:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:13.983 23:02:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.983 23:02:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:13.983 23:02:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.983 23:02:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82780 00:15:13.984 23:02:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:13.984 [2024-05-14 23:02:26.271984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.984 23:02:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:14.242 Malloc0 00:15:14.242 23:02:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:14.499 23:02:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.138 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.138 [2024-05-14 23:02:27.385586] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:15.138 [2024-05-14 23:02:27.385881] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.138 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:15.396 [2024-05-14 23:02:27.634005] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:15.396 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=82885 00:15:15.396 23:02:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:15.396 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.396 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 82885 /var/tmp/bdevperf.sock 00:15:15.396 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 82885 ']' 00:15:15.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.396 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.396 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:15.396 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.396 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:15.396 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:15.654 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:15.654 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:15:15.654 23:02:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:16.221 23:02:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:16.479 Nvme0n1 00:15:16.479 23:02:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:17.046 Nvme0n1 00:15:17.046 23:02:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:17.046 23:02:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:18.948 23:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:18.948 23:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:19.206 23:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:19.486 23:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:20.448 23:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:20.448 23:02:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:20.448 23:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.448 23:02:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:21.013 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.013 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:21.013 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.013 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:21.270 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:21.270 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:21.270 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.270 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:21.527 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.527 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:21.527 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.527 23:02:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:21.784 23:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.784 23:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:21.784 23:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.784 23:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:22.041 23:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.041 23:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:22.041 23:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.041 23:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:22.607 23:02:34 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.607 23:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:22.607 23:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:22.865 23:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:23.122 23:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:24.052 23:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:24.052 23:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:24.052 23:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.052 23:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:24.308 23:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:24.308 23:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:24.567 23:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.567 23:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:24.825 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:24.825 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:24.825 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.825 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:25.083 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.083 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:25.083 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:25.083 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.340 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.340 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:25.340 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.340 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:25.597 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.597 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:25.597 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.597 23:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:26.160 23:02:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.160 23:02:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:26.160 23:02:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:26.417 23:02:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:26.675 23:02:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:27.604 23:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:27.604 23:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:27.604 23:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:27.604 23:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:28.206 23:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.206 23:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:28.206 23:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.206 23:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:28.463 23:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:28.463 23:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:28.463 23:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.463 23:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:15:28.722 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.722 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:28.722 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.722 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:28.980 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.980 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:28.980 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.980 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:29.242 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.242 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:29.242 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.242 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:29.502 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.502 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:29.502 23:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:29.760 23:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:30.326 23:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:31.266 23:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:31.266 23:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:31.266 23:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.266 23:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:31.527 23:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:31.527 23:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # 
port_status 4421 current false 00:15:31.527 23:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.527 23:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:31.785 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:31.785 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:31.785 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.785 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:32.352 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:32.352 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:32.352 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:32.352 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:32.610 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:32.611 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:32.611 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:32.611 23:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:32.869 23:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:32.869 23:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:32.869 23:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:32.869 23:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:33.127 23:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:33.127 23:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:15:33.127 23:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:33.385 23:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n inaccessible 00:15:33.643 23:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:34.578 23:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:34.578 23:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:34.578 23:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:34.578 23:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:35.143 23:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:35.143 23:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:35.143 23:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:35.144 23:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:35.401 23:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:35.401 23:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:35.401 23:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:35.401 23:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:35.659 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:35.659 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:35.659 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:35.659 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:36.248 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:36.248 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:36.248 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.248 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:36.248 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:36.248 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:36.248 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:15:36.248 23:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.814 23:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:36.814 23:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:36.814 23:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:37.072 23:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:37.331 23:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:38.279 23:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:38.279 23:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:38.279 23:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:38.279 23:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:38.537 23:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:38.537 23:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:38.537 23:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:38.537 23:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:38.795 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:38.795 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:38.795 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:38.795 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.054 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.054 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:39.054 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:39.054 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.620 23:02:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.620 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:39.620 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.620 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:39.620 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:39.620 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:39.620 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.620 23:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:39.878 23:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.878 23:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:40.136 23:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:40.136 23:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:40.394 23:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:40.961 23:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:15:41.900 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:15:41.900 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:41.900 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.900 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:42.158 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.158 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:42.158 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.158 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:42.416 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.416 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:42.416 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.416 23:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:42.983 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.983 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:42.983 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.983 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:43.241 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:43.241 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:43.241 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:43.241 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:43.499 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:43.499 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:43.499 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:43.499 23:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:44.065 23:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.065 23:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:15:44.065 23:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:44.324 23:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:44.582 23:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:15:45.517 23:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:15:45.517 23:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:45.517 23:02:57 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.517 23:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:46.083 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:46.083 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:46.083 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:46.083 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:46.341 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:46.341 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:46.341 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:46.341 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:46.600 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:46.600 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:46.600 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:46.600 23:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:46.858 23:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:46.858 23:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:46.858 23:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:46.858 23:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:47.424 23:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.424 23:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:47.424 23:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.424 23:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:47.682 23:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.682 23:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # 
set_ANA_state non_optimized non_optimized 00:15:47.682 23:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:47.939 23:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:48.505 23:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:15:49.440 23:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:15:49.440 23:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:49.440 23:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.440 23:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:49.699 23:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:49.699 23:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:49.699 23:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.699 23:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:49.957 23:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:49.957 23:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:49.957 23:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.957 23:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:50.523 23:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.523 23:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:50.523 23:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.523 23:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:50.781 23:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.781 23:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:50.781 23:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.781 23:03:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:51.039 23:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.039 23:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:51.039 23:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.039 23:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:51.297 23:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.297 23:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:15:51.297 23:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:51.554 23:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:51.812 23:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:15:52.745 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:15:52.745 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:52.745 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:52.745 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.003 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:53.003 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:53.003 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:53.003 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.570 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:53.570 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:53.570 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.570 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:53.828 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:15:53.828 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:53.828 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.828 23:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:54.087 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.087 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:54.087 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:54.087 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.345 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.345 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:54.345 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.345 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 82885 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 82885 ']' 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 82885 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82885 00:15:54.603 killing process with pid 82885 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82885' 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 82885 00:15:54.603 23:03:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 82885 00:15:54.603 Connection closed with partial response: 00:15:54.603 00:15:54.603 00:15:54.873 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 82885 00:15:54.873 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:54.873 [2024-05-14 23:02:27.698275] Starting SPDK 
v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:54.873 [2024-05-14 23:02:27.698397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82885 ] 00:15:54.873 [2024-05-14 23:02:27.834212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.873 [2024-05-14 23:02:27.917692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.873 Running I/O for 90 seconds... 00:15:54.873 [2024-05-14 23:02:45.695944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.873 [2024-05-14 23:02:45.696039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:54.873 [2024-05-14 23:02:45.696130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.873 [2024-05-14 23:02:45.696162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:54.873 [2024-05-14 23:02:45.696208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.873 [2024-05-14 23:02:45.696235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:54.873 [2024-05-14 23:02:45.696269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.873 [2024-05-14 23:02:45.696294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:54.873 [2024-05-14 23:02:45.696329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.873 [2024-05-14 23:02:45.696356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:54.873 [2024-05-14 23:02:45.696391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.873 [2024-05-14 23:02:45.696416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:54.873 [2024-05-14 23:02:45.696450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.873 [2024-05-14 23:02:45.696475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:54.873 [2024-05-14 23:02:45.696508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.873 [2024-05-14 23:02:45.696532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:54.873 [2024-05-14 23:02:45.697399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.873 [2024-05-14 23:02:45.697442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.697490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.697520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.697560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.697619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.697660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.697688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.697725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.697751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.697810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.697849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.697888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.697913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.697950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.697977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
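For reference while reading the trace above: every port_status check is one bdev_nvme_get_io_paths RPC against the bdevperf application socket, filtered with jq on the listener's trsvcid. The sketch below restates that pattern on its own; the rpc.py path, socket path and JSON field names are taken verbatim from the trace, while the helper name, argument handling and the single-poll-group assumption are illustrative.

#!/usr/bin/env bash
# Minimal restatement of the port_status pattern exercised repeatedly above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # SPDK RPC client used by the test
sock=/var/tmp/bdevperf.sock                       # bdevperf application socket

# port_status <trsvcid> <field> <expected>, e.g. port_status 4421 accessible true
port_status() {
    local port=$1 field=$2 expected=$3 got
    # One io_paths entry per poll group; bdevperf runs on a single core in this
    # job, so a single value comes back from the filter.
    got=$("$rpc" -s "$sock" bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$got" == "$expected" ]]
}

port_status 4420 current false && echo "4420 is not the active I/O path"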
00:15:54.874 [2024-05-14 23:02:45.698138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.698947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.698973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.699940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.699969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.700008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.700034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.700070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
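The set_ANA_state steps traced above (@59/@60) are simply two nvmf_subsystem_listener_set_ana_state RPCs against the target, one per listener port. Below is a hedged sketch of that step using the subsystem NQN, address and ports from this run; only the wrapper function itself is illustrative.

#!/usr/bin/env bash
# Sketch of the set_ANA_state step seen in the trace: one RPC per listener.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {
    # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized inaccessible   # as in the @133 step above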
00:15:54.874 [2024-05-14 23:02:45.700115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:54.874 [2024-05-14 23:02:45.700154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.874 [2024-05-14 23:02:45.700182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.700247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.700312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.700376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.700441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.700506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.700570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.700636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.700720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.700803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.700870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.700951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.700991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.701020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.701088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.701154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.701221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.701286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.701350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.875 [2024-05-14 23:02:45.701416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.701481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.701547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.701612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.701679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.701751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.701857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.701921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.701959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.701986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.702025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.702051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.702089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.702115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
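The @116 step above switches the Nvme0n1 bdev to the active/active multipath policy before the remaining ANA permutations are exercised, and each ANA change is followed by sleep 1 before check_status runs, presumably to give the host time to pick up the new ANA state. A sketch of that sequencing follows; the RPC names, bdev name and socket come from the trace, and the trailing jq inspection is illustrative.

#!/usr/bin/env bash
# Sketch of the policy-switch + settle + inspect sequencing from the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Switch Nvme0n1 to active/active multipath (as at @116 above).
"$rpc" -s "$sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

# After each ANA change the test sleeps briefly before re-checking the paths.
sleep 1
"$rpc" -s "$sock" bdev_nvme_get_io_paths | jq .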
00:15:54.875 [2024-05-14 23:02:45.702154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.702180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.702218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.702244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.702281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.702308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.702345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.702372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.702408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.702435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.702472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.702502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.702807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.702843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.702909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.702939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.702985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.703012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.703058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.703086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.703131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.703157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:54.875 [2024-05-14 23:02:45.703200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.875 [2024-05-14 23:02:45.703227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.703270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.703298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.703340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.703368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.703417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.703447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.703489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.703519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.703560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.703588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.703630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.703659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.703701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.703729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.703786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.703836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.703882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.703910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.703953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.703980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
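The killprocess 82885 sequence near the end of the trace is the common autotest helper: validate the pid, confirm the process still exists and is not a sudo wrapper, announce the kill, then kill and wait. The loosely reconstructed sketch below simplifies the real helper's uname branch and error handling, so treat it as an approximation rather than the helper's source.

#!/usr/bin/env bash
# Loose sketch of the killprocess pattern traced above (simplified).
killprocess() {
    local pid=$1 name
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_2 in this run
    [[ "$name" != sudo ]] || return 1                 # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it if it is our child
}

killprocess 82885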
00:15:54.876 [2024-05-14 23:02:45.704567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.704962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.704990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.705953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.705995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.706024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.706066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.706094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:02:45.706138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.876 [2024-05-14 23:02:45.706166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:54.876 [2024-05-14 23:03:04.104141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.876 [2024-05-14 23:03:04.104234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.104293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.104328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.104365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.104395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.104433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.104462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.104503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.104533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.104604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.104635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.104673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.104722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.104787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.104822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:15:54.877 [2024-05-14 23:03:04.104864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.104895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.104935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.104964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.105034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.105114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.105182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.105249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.105318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.105387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.105449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.105526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.105588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.105654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.105723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.105819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.105891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.105959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.105997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.106028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.106067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.106097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.106146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.106176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.106214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.106244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.106284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.106316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.106355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.106402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.108685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.108783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.108843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.108881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.108934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.108965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.109004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.109035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.109071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.109099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.109133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.877 [2024-05-14 23:03:04.109164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.109205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.877 [2024-05-14 23:03:04.109236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.109274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:54.877 [2024-05-14 23:03:04.109304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:54.877 [2024-05-14 23:03:04.109341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.109372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.109413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.109445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.109483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.109512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.109552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.109582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.109641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.109672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.109711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.109745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.109813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.109843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.109880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.109908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.109945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.109986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.110057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.110127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.110193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.110261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.110325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.110392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.110462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.110556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.110625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.110696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.110786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.110865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.110934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.110971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.110999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.111035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.111063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.111099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.111126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.111162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.111190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.111229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.111260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.111299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.111329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.111367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.111416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:15:54.878 [2024-05-14 23:03:04.111457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.111491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.111532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.111563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.111600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.878 [2024-05-14 23:03:04.111629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.114458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.114523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.114578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.114612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.114655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.878 [2024-05-14 23:03:04.114685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:54.878 [2024-05-14 23:03:04.114723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.114753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.114816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.114847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.114884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.114915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.114951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.114982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.115052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.115149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.115215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.115285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.115354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.115423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.115492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.115565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.115640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.115701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.115793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.115867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.115932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.115971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.116001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.116090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.116159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.116233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.116300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.116367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:54.879 [2024-05-14 23:03:04.116437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.116497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.116559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.116623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.116690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.116801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.116872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.116936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.116968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.117009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.117041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.118087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.118154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.118209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 
nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.118242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.118280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.118307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.118346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.118377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.118416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.118447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.118485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.879 [2024-05-14 23:03:04.118515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.118553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.118583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.118622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.879 [2024-05-14 23:03:04.118655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:54.879 [2024-05-14 23:03:04.118695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.118725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.118783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.118817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.118855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.880 [2024-05-14 23:03:04.118905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.118945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.880 [2024-05-14 23:03:04.118975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.119013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.880 [2024-05-14 23:03:04.119043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.119079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.880 [2024-05-14 23:03:04.119109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.119147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.119186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.119224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.119254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.119958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.120014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.880 [2024-05-14 23:03:04.120102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.880 [2024-05-14 23:03:04.120175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.120247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.120316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:15:54.880 [2024-05-14 23:03:04.120357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.120389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.120476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.880 [2024-05-14 23:03:04.120547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.880 [2024-05-14 23:03:04.120617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.120684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.120792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.120863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.880 [2024-05-14 23:03:04.120936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.120975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.880 [2024-05-14 23:03:04.121006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:54.880 [2024-05-14 23:03:04.121043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.880 [2024-05-14 23:03:04.121070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:15:54.880 [2024-05-14 23:03:04.121 - 23:03:04.155] nvme_qpair.c: [repeated *NOTICE* output, one pair per outstanding command on the I/O queue: 243:nvme_io_qpair_print_command READ/WRITE sqid:1 entries, each followed by a 474:spdk_nvme_print_completion entry reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1]
00:15:54.885 [2024-05-14 23:03:04.155629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.885 [2024-05-14 23:03:04.155659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:54.885 [2024-05-14 23:03:04.155694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.885 [2024-05-14 23:03:04.155721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:54.885 [2024-05-14 23:03:04.155756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.885 [2024-05-14 23:03:04.155805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:54.885 [2024-05-14 23:03:04.155844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.885 [2024-05-14 23:03:04.155874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:54.885 [2024-05-14 23:03:04.155909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.885 [2024-05-14 23:03:04.155937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:54.885 [2024-05-14 23:03:04.155971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.155997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.156454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.156911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.156949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:15:54.886 [2024-05-14 23:03:04.156983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.157009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.157074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.157136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.157199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.157259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.157324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.157389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.157457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.157524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.157592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.157675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.157745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.157843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.157906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.157941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.157967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.158005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.158034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.158072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.158100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.158135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.158161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.158199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.886 [2024-05-14 23:03:04.158229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.158267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.886 [2024-05-14 23:03:04.158297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:54.886 [2024-05-14 23:03:04.158338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.887 [2024-05-14 23:03:04.158366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.158402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.887 [2024-05-14 23:03:04.158430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.160337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.887 [2024-05-14 23:03:04.160419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.160500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.887 [2024-05-14 23:03:04.160535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.160579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.887 [2024-05-14 23:03:04.160608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.160643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.887 [2024-05-14 23:03:04.160670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.160726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.887 [2024-05-14 23:03:04.160758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.160819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.887 [2024-05-14 23:03:04.160846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.160882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.887 [2024-05-14 23:03:04.160911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.160946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:54.887 [2024-05-14 23:03:04.160971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.161005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.887 [2024-05-14 23:03:04.161035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.161076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:54.887 [2024-05-14 23:03:04.161105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.161139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.887 [2024-05-14 23:03:04.161168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.161206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.887 [2024-05-14 23:03:04.161237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.161274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.887 [2024-05-14 23:03:04.161303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.161361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.887 [2024-05-14 23:03:04.161392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:54.887 [2024-05-14 23:03:04.161427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:54.887 [2024-05-14 23:03:04.161458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:54.887 Received shutdown signal, test time was about 37.550811 seconds 00:15:54.887 00:15:54.887 Latency(us) 00:15:54.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.887 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:54.887 Verification LBA range: start 0x0 length 0x4000 00:15:54.887 Nvme0n1 : 37.55 7928.69 30.97 0.00 0.00 16108.10 242.04 4026531.84 00:15:54.887 =================================================================================================================== 00:15:54.887 Total : 7928.69 30.97 0.00 0.00 16108.10 242.04 4026531.84 00:15:54.887 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.146 23:03:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.146 rmmod nvme_tcp 00:15:55.146 rmmod nvme_fabrics 00:15:55.146 rmmod nvme_keyring 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 82780 ']' 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 82780 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 82780 ']' 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 82780 00:15:55.146 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:15:55.147 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:55.147 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82780 00:15:55.147 killing process with pid 82780 00:15:55.147 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:55.147 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:55.147 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82780' 00:15:55.147 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 82780 00:15:55.147 [2024-05-14 23:03:07.447199] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:55.147 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 82780 00:15:55.405 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:55.405 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:55.405 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:55.405 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.405 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:55.405 23:03:07 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.405 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.405 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.405 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:55.405 00:15:55.405 real 0m43.115s 00:15:55.405 user 2m22.825s 00:15:55.405 sys 0m10.579s 00:15:55.405 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:55.405 23:03:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:55.405 ************************************ 00:15:55.405 END TEST nvmf_host_multipath_status 00:15:55.405 ************************************ 00:15:55.405 23:03:07 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:55.405 23:03:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:55.405 23:03:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:55.405 23:03:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:55.405 ************************************ 00:15:55.405 START TEST nvmf_discovery_remove_ifc 00:15:55.405 ************************************ 00:15:55.405 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:55.664 * Looking for test storage... 00:15:55.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.664 
23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # 
NVMF_BRIDGE=nvmf_br 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.664 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:55.665 Cannot find device "nvmf_tgt_br" 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.665 Cannot find device "nvmf_tgt_br2" 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:55.665 Cannot find device "nvmf_tgt_br" 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:55.665 Cannot find device "nvmf_tgt_br2" 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.665 23:03:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.665 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.665 23:03:08 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.665 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.665 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.665 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:55.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:55.923 00:15:55.923 --- 10.0.0.2 ping statistics --- 00:15:55.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.923 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:55.923 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:55.923 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:55.923 00:15:55.923 --- 10.0.0.3 ping statistics --- 00:15:55.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.923 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:55.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:55.923 00:15:55.923 --- 10.0.0.1 ping statistics --- 00:15:55.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.923 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:55.923 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=84218 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 84218 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 84218 ']' 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:55.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:55.924 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:55.924 [2024-05-14 23:03:08.257911] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
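Condensed for readability, the nvmf_veth_init sequence traced above (after the stale links are cleaned up) amounts to roughly the sketch below. The interface names and 10.0.0.x addresses are the ones this run used, the commands need root, and the real logic lives in nvmf/common.sh, so treat this as a sketch of what that helper does rather than a replacement for it.

    # sketch of the veth/bridge topology nvmf_veth_init built above (run as root)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target path 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target path 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the host-side peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # initiator -> both target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> initiator

The ping round-trips reported above (roughly 0.03 to 0.07 ms) are the sanity check that this topology is up before nvmf_tgt is started inside the namespace.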
00:15:55.924 [2024-05-14 23:03:08.257993] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.182 [2024-05-14 23:03:08.397398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.182 [2024-05-14 23:03:08.467216] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.183 [2024-05-14 23:03:08.467272] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.183 [2024-05-14 23:03:08.467286] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.183 [2024-05-14 23:03:08.467297] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.183 [2024-05-14 23:03:08.467306] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.183 [2024-05-14 23:03:08.467335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.183 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:56.183 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:15:56.183 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.183 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:56.183 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:56.442 [2024-05-14 23:03:08.604801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.442 [2024-05-14 23:03:08.612681] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:56.442 [2024-05-14 23:03:08.612981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:56.442 null0 00:15:56.442 [2024-05-14 23:03:08.644842] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=84249 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 84249 /tmp/host.sock 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 84249 ']' 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local 
rpc_addr=/tmp/host.sock 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:56.442 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:56.442 23:03:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:56.442 [2024-05-14 23:03:08.728588] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:56.442 [2024-05-14 23:03:08.728696] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84249 ] 00:15:56.700 [2024-05-14 23:03:08.867491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.700 [2024-05-14 23:03:08.953322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.632 23:03:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:58.565 [2024-05-14 23:03:10.768086] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:58.565 [2024-05-14 23:03:10.768134] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:58.565 [2024-05-14 23:03:10.768155] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 
00:15:58.565 [2024-05-14 23:03:10.854239] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:58.565 [2024-05-14 23:03:10.910150] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:58.565 [2024-05-14 23:03:10.910223] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:58.565 [2024-05-14 23:03:10.910254] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:58.565 [2024-05-14 23:03:10.910272] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:58.565 [2024-05-14 23:03:10.910300] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:58.565 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.565 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:58.565 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:58.565 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.565 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.565 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:58.565 [2024-05-14 23:03:10.916587] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19c0760 was disconnected and freed. delete nvme_qpair. 00:15:58.565 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:58.565 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:58.565 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:58.565 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.823 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:58.823 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:58.823 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:58.823 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:58.823 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:58.823 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.823 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.823 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:58.823 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:58.823 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:58.823 23:03:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:58.823 23:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.823 23:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:58.823 23:03:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:59.773 23:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:59.773 23:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:59.773 23:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.773 23:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:59.773 23:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:59.773 23:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:59.773 23:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:59.773 23:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.773 23:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:59.773 23:03:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:00.707 23:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:00.707 23:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.707 23:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.707 23:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:00.707 23:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:00.707 23:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:00.707 23:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:00.966 23:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.966 23:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:00.966 23:03:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:02.184 23:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:02.184 23:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.184 23:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.184 23:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:02.184 23:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:02.184 23:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:02.184 23:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:02.184 23:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.184 23:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:02.184 23:03:14 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:03.117 23:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:03.117 23:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.117 23:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:03.117 23:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.117 23:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:03.117 23:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:03.117 23:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:03.117 23:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.117 23:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:03.117 23:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:04.124 23:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:04.124 23:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.124 23:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:04.124 23:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:04.124 23:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:04.124 23:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.124 23:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:04.124 23:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.124 23:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:04.124 23:03:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:04.124 [2024-05-14 23:03:16.338129] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:04.124 [2024-05-14 23:03:16.338191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.124 [2024-05-14 23:03:16.338208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.124 [2024-05-14 23:03:16.338221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.124 [2024-05-14 23:03:16.338232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.124 [2024-05-14 23:03:16.338243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.124 [2024-05-14 23:03:16.338252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.124 [2024-05-14 23:03:16.338263] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.124 [2024-05-14 23:03:16.338272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.124 [2024-05-14 23:03:16.338283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.124 [2024-05-14 23:03:16.338292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.124 [2024-05-14 23:03:16.338302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198a3d0 is same with the state(5) to be set 00:16:04.124 [2024-05-14 23:03:16.348122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198a3d0 (9): Bad file descriptor 00:16:04.124 [2024-05-14 23:03:16.358144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:05.061 23:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:05.061 23:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:05.061 23:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.061 23:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:05.061 23:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:05.061 23:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:05.061 23:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:05.061 [2024-05-14 23:03:17.382816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:06.437 [2024-05-14 23:03:18.406870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:06.437 [2024-05-14 23:03:18.407751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x198a3d0 with addr=10.0.0.2, port=4420 00:16:06.437 [2024-05-14 23:03:18.408011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198a3d0 is same with the state(5) to be set 00:16:06.437 [2024-05-14 23:03:18.408986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198a3d0 (9): Bad file descriptor 00:16:06.437 [2024-05-14 23:03:18.409544] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:06.438 [2024-05-14 23:03:18.409889] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:06.438 [2024-05-14 23:03:18.409984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.438 [2024-05-14 23:03:18.410291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.438 [2024-05-14 23:03:18.410431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.438 [2024-05-14 23:03:18.410511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.438 [2024-05-14 23:03:18.410741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.438 [2024-05-14 23:03:18.410908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.438 [2024-05-14 23:03:18.411017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.438 [2024-05-14 23:03:18.411093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.438 [2024-05-14 23:03:18.411173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.438 [2024-05-14 23:03:18.411435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.438 [2024-05-14 23:03:18.411574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
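[editor's note] The repeated bdev_get_bdevs / jq / sort / xargs invocations above are the harness polling for the host app's bdev list to change once the target interface has been removed (interface down at 23:03:10, connect() failing with errno 110 thereafter). A hedged, standalone equivalent of that wait loop follows; the function name wait_for_bdev_list is mine, not the harness's, but the pipeline and the one-second sleep mirror the transcript.

# Poll the host app's bdev list until it matches the expected value
# (e.g. "nvme0n1" while attached, "" once the path has gone away).
wait_for_bdev_list() {
    local expected=$1
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local names
    while true; do
        # Same pipeline the transcript shows: names only, sorted, one line.
        names=$("$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ "$names" == "$expected" ]] && break
        sleep 1
    done
}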
00:16:06.438 [2024-05-14 23:03:18.411690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19291c0 (9): Bad file descriptor 00:16:06.438 [2024-05-14 23:03:18.412138] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:06.438 [2024-05-14 23:03:18.412279] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:06.438 23:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.438 23:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:06.438 23:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:07.373 23:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:08.307 [2024-05-14 23:03:20.420370] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:08.307 [2024-05-14 23:03:20.420421] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:08.307 [2024-05-14 23:03:20.420462] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:08.308 [2024-05-14 23:03:20.506527] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:08.308 23:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:08.308 23:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:08.308 23:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:08.308 23:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.308 23:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:08.308 23:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:08.308 23:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:08.308 [2024-05-14 23:03:20.561646] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:08.308 [2024-05-14 23:03:20.561697] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:08.308 [2024-05-14 23:03:20.561721] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:08.308 [2024-05-14 23:03:20.561739] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:08.308 [2024-05-14 23:03:20.561749] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:08.308 23:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.308 [2024-05-14 23:03:20.568624] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19bd050 was disconnected and freed. delete nvme_qpair. 
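[editor's note] After the failure path has been exercised, the transcript restores the target interface inside the nvmf_tgt_ns_spdk namespace (discovery_remove_ifc.sh lines 82-83 above) and the still-running discovery service re-attaches the subsystem as a new controller, nvme1, backed by bdev nvme1n1. A compact restatement of that recovery step, reusing the hypothetical wait helper sketched earlier:

# Restore the target's address and link inside its network namespace.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# Discovery on 10.0.0.2:8009 is still active in the host app, so the
# subsystem comes back under a new name; wait for the new bdev to appear.
wait_for_bdev_list "nvme1n1"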
00:16:08.308 23:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:08.308 23:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:09.243 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:09.243 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:09.243 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:09.243 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.243 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:09.243 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:09.243 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:09.243 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 84249 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 84249 ']' 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 84249 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84249 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84249' 00:16:09.502 killing process with pid 84249 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 84249 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 84249 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:09.502 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.761 rmmod nvme_tcp 00:16:09.761 rmmod nvme_fabrics 00:16:09.761 rmmod nvme_keyring 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 84218 ']' 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 84218 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 84218 ']' 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 84218 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:09.761 23:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84218 00:16:09.761 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:09.761 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:09.761 killing process with pid 84218 00:16:09.761 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84218' 00:16:09.761 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 84218 00:16:09.761 [2024-05-14 23:03:22.010876] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:09.761 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 84218 00:16:10.020 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:10.020 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:10.020 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:10.020 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.020 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:10.020 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.020 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.020 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.020 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:10.020 00:16:10.020 real 0m14.500s 00:16:10.020 user 0m25.525s 00:16:10.020 sys 0m1.506s 00:16:10.020 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:10.020 ************************************ 00:16:10.020 END TEST nvmf_discovery_remove_ifc 00:16:10.020 ************************************ 00:16:10.020 23:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:10.020 23:03:22 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:10.020 23:03:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:16:10.020 23:03:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:10.020 23:03:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:10.020 ************************************ 00:16:10.020 START TEST nvmf_identify_kernel_target 00:16:10.020 ************************************ 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:10.020 * Looking for test storage... 00:16:10.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.020 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:10.021 23:03:22 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:10.021 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:10.279 Cannot find device "nvmf_tgt_br" 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:10.279 Cannot find device "nvmf_tgt_br2" 00:16:10.279 23:03:22 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:10.279 Cannot find device "nvmf_tgt_br" 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:10.279 Cannot find device "nvmf_tgt_br2" 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:10.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:10.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:10.279 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:10.280 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:10.280 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:10.280 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:10.538 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
set nvmf_tgt_if2 up 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:10.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:16:10.539 00:16:10.539 --- 10.0.0.2 ping statistics --- 00:16:10.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.539 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:10.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:10.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:10.539 00:16:10.539 --- 10.0.0.3 ping statistics --- 00:16:10.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.539 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:10.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:10.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:10.539 00:16:10.539 --- 10.0.0.1 ping statistics --- 00:16:10.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.539 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:10.539 23:03:22 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:10.539 23:03:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:10.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:10.798 Waiting for block devices as requested 00:16:11.061 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:11.061 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:11.061 No valid GPT data, bailing 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:11.061 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:11.319 No valid GPT data, bailing 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:11.319 No valid GPT data, bailing 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:11.319 No valid GPT data, bailing 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:11.319 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -a 10.0.0.1 -t tcp -s 4420 00:16:11.578 00:16:11.578 Discovery Log Number of Records 2, Generation counter 2 00:16:11.578 =====Discovery Log Entry 0====== 00:16:11.578 trtype: tcp 00:16:11.578 adrfam: ipv4 00:16:11.578 subtype: current discovery subsystem 00:16:11.578 treq: not specified, sq flow control disable supported 00:16:11.578 portid: 1 00:16:11.578 trsvcid: 4420 00:16:11.578 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:11.578 traddr: 10.0.0.1 00:16:11.578 eflags: none 00:16:11.578 sectype: none 00:16:11.578 =====Discovery Log Entry 1====== 00:16:11.578 trtype: tcp 00:16:11.578 adrfam: ipv4 00:16:11.578 subtype: nvme subsystem 00:16:11.578 treq: not specified, sq flow control disable supported 00:16:11.578 portid: 1 00:16:11.578 trsvcid: 4420 00:16:11.578 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:11.578 traddr: 10.0.0.1 00:16:11.578 eflags: none 00:16:11.578 sectype: none 00:16:11.578 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:11.578 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:11.578 ===================================================== 00:16:11.578 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:11.578 ===================================================== 00:16:11.578 Controller Capabilities/Features 00:16:11.578 ================================ 00:16:11.578 Vendor ID: 0000 00:16:11.578 Subsystem Vendor ID: 0000 00:16:11.578 Serial Number: 81d7d5892526348adef9 00:16:11.578 Model Number: Linux 00:16:11.578 Firmware Version: 6.7.0-68 00:16:11.578 Recommended Arb Burst: 0 
00:16:11.578 IEEE OUI Identifier: 00 00 00 00:16:11.578 Multi-path I/O 00:16:11.578 May have multiple subsystem ports: No 00:16:11.578 May have multiple controllers: No 00:16:11.578 Associated with SR-IOV VF: No 00:16:11.578 Max Data Transfer Size: Unlimited 00:16:11.578 Max Number of Namespaces: 0 00:16:11.578 Max Number of I/O Queues: 1024 00:16:11.578 NVMe Specification Version (VS): 1.3 00:16:11.578 NVMe Specification Version (Identify): 1.3 00:16:11.578 Maximum Queue Entries: 1024 00:16:11.578 Contiguous Queues Required: No 00:16:11.578 Arbitration Mechanisms Supported 00:16:11.578 Weighted Round Robin: Not Supported 00:16:11.578 Vendor Specific: Not Supported 00:16:11.578 Reset Timeout: 7500 ms 00:16:11.578 Doorbell Stride: 4 bytes 00:16:11.578 NVM Subsystem Reset: Not Supported 00:16:11.578 Command Sets Supported 00:16:11.578 NVM Command Set: Supported 00:16:11.578 Boot Partition: Not Supported 00:16:11.578 Memory Page Size Minimum: 4096 bytes 00:16:11.578 Memory Page Size Maximum: 4096 bytes 00:16:11.578 Persistent Memory Region: Not Supported 00:16:11.578 Optional Asynchronous Events Supported 00:16:11.578 Namespace Attribute Notices: Not Supported 00:16:11.578 Firmware Activation Notices: Not Supported 00:16:11.578 ANA Change Notices: Not Supported 00:16:11.578 PLE Aggregate Log Change Notices: Not Supported 00:16:11.578 LBA Status Info Alert Notices: Not Supported 00:16:11.578 EGE Aggregate Log Change Notices: Not Supported 00:16:11.578 Normal NVM Subsystem Shutdown event: Not Supported 00:16:11.578 Zone Descriptor Change Notices: Not Supported 00:16:11.578 Discovery Log Change Notices: Supported 00:16:11.578 Controller Attributes 00:16:11.578 128-bit Host Identifier: Not Supported 00:16:11.578 Non-Operational Permissive Mode: Not Supported 00:16:11.578 NVM Sets: Not Supported 00:16:11.578 Read Recovery Levels: Not Supported 00:16:11.578 Endurance Groups: Not Supported 00:16:11.578 Predictable Latency Mode: Not Supported 00:16:11.578 Traffic Based Keep ALive: Not Supported 00:16:11.578 Namespace Granularity: Not Supported 00:16:11.578 SQ Associations: Not Supported 00:16:11.578 UUID List: Not Supported 00:16:11.578 Multi-Domain Subsystem: Not Supported 00:16:11.578 Fixed Capacity Management: Not Supported 00:16:11.578 Variable Capacity Management: Not Supported 00:16:11.578 Delete Endurance Group: Not Supported 00:16:11.578 Delete NVM Set: Not Supported 00:16:11.578 Extended LBA Formats Supported: Not Supported 00:16:11.578 Flexible Data Placement Supported: Not Supported 00:16:11.578 00:16:11.578 Controller Memory Buffer Support 00:16:11.578 ================================ 00:16:11.578 Supported: No 00:16:11.578 00:16:11.578 Persistent Memory Region Support 00:16:11.578 ================================ 00:16:11.578 Supported: No 00:16:11.578 00:16:11.578 Admin Command Set Attributes 00:16:11.578 ============================ 00:16:11.578 Security Send/Receive: Not Supported 00:16:11.578 Format NVM: Not Supported 00:16:11.578 Firmware Activate/Download: Not Supported 00:16:11.578 Namespace Management: Not Supported 00:16:11.578 Device Self-Test: Not Supported 00:16:11.578 Directives: Not Supported 00:16:11.578 NVMe-MI: Not Supported 00:16:11.578 Virtualization Management: Not Supported 00:16:11.578 Doorbell Buffer Config: Not Supported 00:16:11.578 Get LBA Status Capability: Not Supported 00:16:11.578 Command & Feature Lockdown Capability: Not Supported 00:16:11.578 Abort Command Limit: 1 00:16:11.578 Async Event Request Limit: 1 00:16:11.578 Number of Firmware Slots: N/A 
00:16:11.578 Firmware Slot 1 Read-Only: N/A 00:16:11.578 Firmware Activation Without Reset: N/A 00:16:11.578 Multiple Update Detection Support: N/A 00:16:11.578 Firmware Update Granularity: No Information Provided 00:16:11.578 Per-Namespace SMART Log: No 00:16:11.578 Asymmetric Namespace Access Log Page: Not Supported 00:16:11.579 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:11.579 Command Effects Log Page: Not Supported 00:16:11.579 Get Log Page Extended Data: Supported 00:16:11.579 Telemetry Log Pages: Not Supported 00:16:11.579 Persistent Event Log Pages: Not Supported 00:16:11.579 Supported Log Pages Log Page: May Support 00:16:11.579 Commands Supported & Effects Log Page: Not Supported 00:16:11.579 Feature Identifiers & Effects Log Page:May Support 00:16:11.579 NVMe-MI Commands & Effects Log Page: May Support 00:16:11.579 Data Area 4 for Telemetry Log: Not Supported 00:16:11.579 Error Log Page Entries Supported: 1 00:16:11.579 Keep Alive: Not Supported 00:16:11.579 00:16:11.579 NVM Command Set Attributes 00:16:11.579 ========================== 00:16:11.579 Submission Queue Entry Size 00:16:11.579 Max: 1 00:16:11.579 Min: 1 00:16:11.579 Completion Queue Entry Size 00:16:11.579 Max: 1 00:16:11.579 Min: 1 00:16:11.579 Number of Namespaces: 0 00:16:11.579 Compare Command: Not Supported 00:16:11.579 Write Uncorrectable Command: Not Supported 00:16:11.579 Dataset Management Command: Not Supported 00:16:11.579 Write Zeroes Command: Not Supported 00:16:11.579 Set Features Save Field: Not Supported 00:16:11.579 Reservations: Not Supported 00:16:11.579 Timestamp: Not Supported 00:16:11.579 Copy: Not Supported 00:16:11.579 Volatile Write Cache: Not Present 00:16:11.579 Atomic Write Unit (Normal): 1 00:16:11.579 Atomic Write Unit (PFail): 1 00:16:11.579 Atomic Compare & Write Unit: 1 00:16:11.579 Fused Compare & Write: Not Supported 00:16:11.579 Scatter-Gather List 00:16:11.579 SGL Command Set: Supported 00:16:11.579 SGL Keyed: Not Supported 00:16:11.579 SGL Bit Bucket Descriptor: Not Supported 00:16:11.579 SGL Metadata Pointer: Not Supported 00:16:11.579 Oversized SGL: Not Supported 00:16:11.579 SGL Metadata Address: Not Supported 00:16:11.579 SGL Offset: Supported 00:16:11.579 Transport SGL Data Block: Not Supported 00:16:11.579 Replay Protected Memory Block: Not Supported 00:16:11.579 00:16:11.579 Firmware Slot Information 00:16:11.579 ========================= 00:16:11.579 Active slot: 0 00:16:11.579 00:16:11.579 00:16:11.579 Error Log 00:16:11.579 ========= 00:16:11.579 00:16:11.579 Active Namespaces 00:16:11.579 ================= 00:16:11.579 Discovery Log Page 00:16:11.579 ================== 00:16:11.579 Generation Counter: 2 00:16:11.579 Number of Records: 2 00:16:11.579 Record Format: 0 00:16:11.579 00:16:11.579 Discovery Log Entry 0 00:16:11.579 ---------------------- 00:16:11.579 Transport Type: 3 (TCP) 00:16:11.579 Address Family: 1 (IPv4) 00:16:11.579 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:11.579 Entry Flags: 00:16:11.579 Duplicate Returned Information: 0 00:16:11.579 Explicit Persistent Connection Support for Discovery: 0 00:16:11.579 Transport Requirements: 00:16:11.579 Secure Channel: Not Specified 00:16:11.579 Port ID: 1 (0x0001) 00:16:11.579 Controller ID: 65535 (0xffff) 00:16:11.579 Admin Max SQ Size: 32 00:16:11.579 Transport Service Identifier: 4420 00:16:11.579 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:11.579 Transport Address: 10.0.0.1 00:16:11.579 Discovery Log Entry 1 00:16:11.579 ---------------------- 
00:16:11.579 Transport Type: 3 (TCP) 00:16:11.579 Address Family: 1 (IPv4) 00:16:11.579 Subsystem Type: 2 (NVM Subsystem) 00:16:11.579 Entry Flags: 00:16:11.579 Duplicate Returned Information: 0 00:16:11.579 Explicit Persistent Connection Support for Discovery: 0 00:16:11.579 Transport Requirements: 00:16:11.579 Secure Channel: Not Specified 00:16:11.579 Port ID: 1 (0x0001) 00:16:11.579 Controller ID: 65535 (0xffff) 00:16:11.579 Admin Max SQ Size: 32 00:16:11.579 Transport Service Identifier: 4420 00:16:11.579 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:11.579 Transport Address: 10.0.0.1 00:16:11.579 23:03:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:11.837 get_feature(0x01) failed 00:16:11.837 get_feature(0x02) failed 00:16:11.837 get_feature(0x04) failed 00:16:11.837 ===================================================== 00:16:11.837 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:11.837 ===================================================== 00:16:11.837 Controller Capabilities/Features 00:16:11.838 ================================ 00:16:11.838 Vendor ID: 0000 00:16:11.838 Subsystem Vendor ID: 0000 00:16:11.838 Serial Number: a63f4087c3b4ae8c2d20 00:16:11.838 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:11.838 Firmware Version: 6.7.0-68 00:16:11.838 Recommended Arb Burst: 6 00:16:11.838 IEEE OUI Identifier: 00 00 00 00:16:11.838 Multi-path I/O 00:16:11.838 May have multiple subsystem ports: Yes 00:16:11.838 May have multiple controllers: Yes 00:16:11.838 Associated with SR-IOV VF: No 00:16:11.838 Max Data Transfer Size: Unlimited 00:16:11.838 Max Number of Namespaces: 1024 00:16:11.838 Max Number of I/O Queues: 128 00:16:11.838 NVMe Specification Version (VS): 1.3 00:16:11.838 NVMe Specification Version (Identify): 1.3 00:16:11.838 Maximum Queue Entries: 1024 00:16:11.838 Contiguous Queues Required: No 00:16:11.838 Arbitration Mechanisms Supported 00:16:11.838 Weighted Round Robin: Not Supported 00:16:11.838 Vendor Specific: Not Supported 00:16:11.838 Reset Timeout: 7500 ms 00:16:11.838 Doorbell Stride: 4 bytes 00:16:11.838 NVM Subsystem Reset: Not Supported 00:16:11.838 Command Sets Supported 00:16:11.838 NVM Command Set: Supported 00:16:11.838 Boot Partition: Not Supported 00:16:11.838 Memory Page Size Minimum: 4096 bytes 00:16:11.838 Memory Page Size Maximum: 4096 bytes 00:16:11.838 Persistent Memory Region: Not Supported 00:16:11.838 Optional Asynchronous Events Supported 00:16:11.838 Namespace Attribute Notices: Supported 00:16:11.838 Firmware Activation Notices: Not Supported 00:16:11.838 ANA Change Notices: Supported 00:16:11.838 PLE Aggregate Log Change Notices: Not Supported 00:16:11.838 LBA Status Info Alert Notices: Not Supported 00:16:11.838 EGE Aggregate Log Change Notices: Not Supported 00:16:11.838 Normal NVM Subsystem Shutdown event: Not Supported 00:16:11.838 Zone Descriptor Change Notices: Not Supported 00:16:11.838 Discovery Log Change Notices: Not Supported 00:16:11.838 Controller Attributes 00:16:11.838 128-bit Host Identifier: Supported 00:16:11.838 Non-Operational Permissive Mode: Not Supported 00:16:11.838 NVM Sets: Not Supported 00:16:11.838 Read Recovery Levels: Not Supported 00:16:11.838 Endurance Groups: Not Supported 00:16:11.838 Predictable Latency Mode: Not Supported 00:16:11.838 Traffic Based Keep ALive: 
Supported 00:16:11.838 Namespace Granularity: Not Supported 00:16:11.838 SQ Associations: Not Supported 00:16:11.838 UUID List: Not Supported 00:16:11.838 Multi-Domain Subsystem: Not Supported 00:16:11.838 Fixed Capacity Management: Not Supported 00:16:11.838 Variable Capacity Management: Not Supported 00:16:11.838 Delete Endurance Group: Not Supported 00:16:11.838 Delete NVM Set: Not Supported 00:16:11.838 Extended LBA Formats Supported: Not Supported 00:16:11.838 Flexible Data Placement Supported: Not Supported 00:16:11.838 00:16:11.838 Controller Memory Buffer Support 00:16:11.838 ================================ 00:16:11.838 Supported: No 00:16:11.838 00:16:11.838 Persistent Memory Region Support 00:16:11.838 ================================ 00:16:11.838 Supported: No 00:16:11.838 00:16:11.838 Admin Command Set Attributes 00:16:11.838 ============================ 00:16:11.838 Security Send/Receive: Not Supported 00:16:11.838 Format NVM: Not Supported 00:16:11.838 Firmware Activate/Download: Not Supported 00:16:11.838 Namespace Management: Not Supported 00:16:11.838 Device Self-Test: Not Supported 00:16:11.838 Directives: Not Supported 00:16:11.838 NVMe-MI: Not Supported 00:16:11.838 Virtualization Management: Not Supported 00:16:11.838 Doorbell Buffer Config: Not Supported 00:16:11.838 Get LBA Status Capability: Not Supported 00:16:11.838 Command & Feature Lockdown Capability: Not Supported 00:16:11.838 Abort Command Limit: 4 00:16:11.838 Async Event Request Limit: 4 00:16:11.838 Number of Firmware Slots: N/A 00:16:11.838 Firmware Slot 1 Read-Only: N/A 00:16:11.838 Firmware Activation Without Reset: N/A 00:16:11.838 Multiple Update Detection Support: N/A 00:16:11.838 Firmware Update Granularity: No Information Provided 00:16:11.838 Per-Namespace SMART Log: Yes 00:16:11.838 Asymmetric Namespace Access Log Page: Supported 00:16:11.838 ANA Transition Time : 10 sec 00:16:11.838 00:16:11.838 Asymmetric Namespace Access Capabilities 00:16:11.838 ANA Optimized State : Supported 00:16:11.838 ANA Non-Optimized State : Supported 00:16:11.838 ANA Inaccessible State : Supported 00:16:11.838 ANA Persistent Loss State : Supported 00:16:11.838 ANA Change State : Supported 00:16:11.838 ANAGRPID is not changed : No 00:16:11.838 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:11.838 00:16:11.838 ANA Group Identifier Maximum : 128 00:16:11.838 Number of ANA Group Identifiers : 128 00:16:11.838 Max Number of Allowed Namespaces : 1024 00:16:11.838 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:16:11.838 Command Effects Log Page: Supported 00:16:11.838 Get Log Page Extended Data: Supported 00:16:11.838 Telemetry Log Pages: Not Supported 00:16:11.838 Persistent Event Log Pages: Not Supported 00:16:11.838 Supported Log Pages Log Page: May Support 00:16:11.838 Commands Supported & Effects Log Page: Not Supported 00:16:11.838 Feature Identifiers & Effects Log Page:May Support 00:16:11.838 NVMe-MI Commands & Effects Log Page: May Support 00:16:11.838 Data Area 4 for Telemetry Log: Not Supported 00:16:11.838 Error Log Page Entries Supported: 128 00:16:11.838 Keep Alive: Supported 00:16:11.838 Keep Alive Granularity: 1000 ms 00:16:11.838 00:16:11.838 NVM Command Set Attributes 00:16:11.838 ========================== 00:16:11.838 Submission Queue Entry Size 00:16:11.838 Max: 64 00:16:11.838 Min: 64 00:16:11.838 Completion Queue Entry Size 00:16:11.838 Max: 16 00:16:11.838 Min: 16 00:16:11.838 Number of Namespaces: 1024 00:16:11.838 Compare Command: Not Supported 00:16:11.838 Write Uncorrectable Command: Not 
Supported 00:16:11.838 Dataset Management Command: Supported 00:16:11.838 Write Zeroes Command: Supported 00:16:11.838 Set Features Save Field: Not Supported 00:16:11.838 Reservations: Not Supported 00:16:11.838 Timestamp: Not Supported 00:16:11.838 Copy: Not Supported 00:16:11.838 Volatile Write Cache: Present 00:16:11.838 Atomic Write Unit (Normal): 1 00:16:11.838 Atomic Write Unit (PFail): 1 00:16:11.838 Atomic Compare & Write Unit: 1 00:16:11.838 Fused Compare & Write: Not Supported 00:16:11.838 Scatter-Gather List 00:16:11.838 SGL Command Set: Supported 00:16:11.838 SGL Keyed: Not Supported 00:16:11.838 SGL Bit Bucket Descriptor: Not Supported 00:16:11.838 SGL Metadata Pointer: Not Supported 00:16:11.838 Oversized SGL: Not Supported 00:16:11.838 SGL Metadata Address: Not Supported 00:16:11.838 SGL Offset: Supported 00:16:11.838 Transport SGL Data Block: Not Supported 00:16:11.838 Replay Protected Memory Block: Not Supported 00:16:11.838 00:16:11.838 Firmware Slot Information 00:16:11.838 ========================= 00:16:11.838 Active slot: 0 00:16:11.838 00:16:11.838 Asymmetric Namespace Access 00:16:11.838 =========================== 00:16:11.838 Change Count : 0 00:16:11.838 Number of ANA Group Descriptors : 1 00:16:11.838 ANA Group Descriptor : 0 00:16:11.838 ANA Group ID : 1 00:16:11.838 Number of NSID Values : 1 00:16:11.838 Change Count : 0 00:16:11.838 ANA State : 1 00:16:11.838 Namespace Identifier : 1 00:16:11.838 00:16:11.838 Commands Supported and Effects 00:16:11.838 ============================== 00:16:11.838 Admin Commands 00:16:11.838 -------------- 00:16:11.838 Get Log Page (02h): Supported 00:16:11.838 Identify (06h): Supported 00:16:11.838 Abort (08h): Supported 00:16:11.838 Set Features (09h): Supported 00:16:11.838 Get Features (0Ah): Supported 00:16:11.838 Asynchronous Event Request (0Ch): Supported 00:16:11.838 Keep Alive (18h): Supported 00:16:11.838 I/O Commands 00:16:11.838 ------------ 00:16:11.838 Flush (00h): Supported 00:16:11.838 Write (01h): Supported LBA-Change 00:16:11.838 Read (02h): Supported 00:16:11.838 Write Zeroes (08h): Supported LBA-Change 00:16:11.838 Dataset Management (09h): Supported 00:16:11.838 00:16:11.838 Error Log 00:16:11.838 ========= 00:16:11.838 Entry: 0 00:16:11.838 Error Count: 0x3 00:16:11.838 Submission Queue Id: 0x0 00:16:11.838 Command Id: 0x5 00:16:11.838 Phase Bit: 0 00:16:11.838 Status Code: 0x2 00:16:11.838 Status Code Type: 0x0 00:16:11.838 Do Not Retry: 1 00:16:11.838 Error Location: 0x28 00:16:11.838 LBA: 0x0 00:16:11.838 Namespace: 0x0 00:16:11.838 Vendor Log Page: 0x0 00:16:11.838 ----------- 00:16:11.838 Entry: 1 00:16:11.838 Error Count: 0x2 00:16:11.838 Submission Queue Id: 0x0 00:16:11.838 Command Id: 0x5 00:16:11.838 Phase Bit: 0 00:16:11.838 Status Code: 0x2 00:16:11.838 Status Code Type: 0x0 00:16:11.838 Do Not Retry: 1 00:16:11.838 Error Location: 0x28 00:16:11.838 LBA: 0x0 00:16:11.838 Namespace: 0x0 00:16:11.838 Vendor Log Page: 0x0 00:16:11.838 ----------- 00:16:11.839 Entry: 2 00:16:11.839 Error Count: 0x1 00:16:11.839 Submission Queue Id: 0x0 00:16:11.839 Command Id: 0x4 00:16:11.839 Phase Bit: 0 00:16:11.839 Status Code: 0x2 00:16:11.839 Status Code Type: 0x0 00:16:11.839 Do Not Retry: 1 00:16:11.839 Error Location: 0x28 00:16:11.839 LBA: 0x0 00:16:11.839 Namespace: 0x0 00:16:11.839 Vendor Log Page: 0x0 00:16:11.839 00:16:11.839 Number of Queues 00:16:11.839 ================ 00:16:11.839 Number of I/O Submission Queues: 128 00:16:11.839 Number of I/O Completion Queues: 128 00:16:11.839 00:16:11.839 ZNS 
Specific Controller Data 00:16:11.839 ============================ 00:16:11.839 Zone Append Size Limit: 0 00:16:11.839 00:16:11.839 00:16:11.839 Active Namespaces 00:16:11.839 ================= 00:16:11.839 get_feature(0x05) failed 00:16:11.839 Namespace ID:1 00:16:11.839 Command Set Identifier: NVM (00h) 00:16:11.839 Deallocate: Supported 00:16:11.839 Deallocated/Unwritten Error: Not Supported 00:16:11.839 Deallocated Read Value: Unknown 00:16:11.839 Deallocate in Write Zeroes: Not Supported 00:16:11.839 Deallocated Guard Field: 0xFFFF 00:16:11.839 Flush: Supported 00:16:11.839 Reservation: Not Supported 00:16:11.839 Namespace Sharing Capabilities: Multiple Controllers 00:16:11.839 Size (in LBAs): 1310720 (5GiB) 00:16:11.839 Capacity (in LBAs): 1310720 (5GiB) 00:16:11.839 Utilization (in LBAs): 1310720 (5GiB) 00:16:11.839 UUID: 9cd6ddc5-3df8-4916-834a-9d1947ce25b5 00:16:11.839 Thin Provisioning: Not Supported 00:16:11.839 Per-NS Atomic Units: Yes 00:16:11.839 Atomic Boundary Size (Normal): 0 00:16:11.839 Atomic Boundary Size (PFail): 0 00:16:11.839 Atomic Boundary Offset: 0 00:16:11.839 NGUID/EUI64 Never Reused: No 00:16:11.839 ANA group ID: 1 00:16:11.839 Namespace Write Protected: No 00:16:11.839 Number of LBA Formats: 1 00:16:11.839 Current LBA Format: LBA Format #00 00:16:11.839 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:11.839 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:11.839 rmmod nvme_tcp 00:16:11.839 rmmod nvme_fabrics 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:11.839 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:12.097 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:12.097 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:12.097 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:12.097 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:12.097 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:12.097 23:03:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:12.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:12.663 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:12.663 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:12.922 00:16:12.922 real 0m2.786s 00:16:12.922 user 0m0.989s 00:16:12.922 sys 0m1.282s 00:16:12.922 23:03:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:12.922 23:03:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.922 ************************************ 00:16:12.922 END TEST nvmf_identify_kernel_target 00:16:12.922 ************************************ 00:16:12.922 23:03:25 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:12.922 23:03:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:12.922 23:03:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:12.922 23:03:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.922 ************************************ 00:16:12.922 START TEST nvmf_auth 00:16:12.922 ************************************ 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:12.922 * Looking for test storage... 
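The identify_kernel_target run above drives the Linux nvmet configfs interface directly: it creates a subsystem, attaches one of the freshly reset namespaces, opens a TCP port on 10.0.0.1:4420, and tears everything down again at the end. A condensed sketch of that setup and teardown follows; the xtrace above hides the redirection targets of each echo, so the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs files and are an assumption here, not something the log shows.

  modprobe nvmet_tcp                                 # pulls in nvmet as a dependency
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$subsys" "$subsys/namespaces/1" "$port"     # same three mkdirs as in the trace

  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"        # assumed target of 'echo SPDK-nqn...'
  echo 1            > "$subsys/attr_allow_any_host"                   # assumed target of the first 'echo 1'
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"              # block device picked by the scan above
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                # expose the subsystem on the port

  # teardown, mirroring clean_kernel_target in the trace
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet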
00:16:12.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:12.922 Cannot find device "nvmf_tgt_br" 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@155 -- # true 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.922 Cannot find device "nvmf_tgt_br2" 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@156 -- # true 00:16:12.922 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:12.923 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:12.923 Cannot find device "nvmf_tgt_br" 00:16:12.923 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@158 -- # true 00:16:12.923 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:12.923 Cannot find device "nvmf_tgt_br2" 00:16:12.923 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@159 -- # true 00:16:12.923 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@162 -- # true 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@163 -- # true 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:13.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:13.182 00:16:13.182 --- 10.0.0.2 ping statistics --- 00:16:13.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.182 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:13.182 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:13.182 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:16:13.182 00:16:13.182 --- 10.0.0.3 ping statistics --- 00:16:13.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.182 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:13.182 00:16:13.182 --- 10.0.0.1 ping statistics --- 00:16:13.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.182 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@433 -- # return 0 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:13.182 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=85169 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 85169 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 85169 ']' 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
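The nvmf_veth_init sequence above builds the virtual test network the auth test runs on: the target lives in the nvmf_tgt_ns_spdk namespace, the initiator side stays in the root namespace, and a bridge joins the veth pairs. A condensed sketch using only commands visible in the trace (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is configured the same way and is omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                                        # sanity check, as in the log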
00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:13.441 23:03:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=196edbe2b18a229f0c78f0631bc38a45 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.TMn 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 196edbe2b18a229f0c78f0631bc38a45 0 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 196edbe2b18a229f0c78f0631bc38a45 0 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=196edbe2b18a229f0c78f0631bc38a45 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.TMn 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.TMn 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.TMn 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # 
key=5d3a33966661f16b1d86a91cb4f3244355eae067cce991bd71ad7f54035ac1dc 00:16:14.376 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.C1u 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 5d3a33966661f16b1d86a91cb4f3244355eae067cce991bd71ad7f54035ac1dc 3 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 5d3a33966661f16b1d86a91cb4f3244355eae067cce991bd71ad7f54035ac1dc 3 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=5d3a33966661f16b1d86a91cb4f3244355eae067cce991bd71ad7f54035ac1dc 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.C1u 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.C1u 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.C1u 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=85cd2b2d7275029df3286cf08fc02a1cdc4a0063a7385fa4 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.87K 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 85cd2b2d7275029df3286cf08fc02a1cdc4a0063a7385fa4 0 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 85cd2b2d7275029df3286cf08fc02a1cdc4a0063a7385fa4 0 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=85cd2b2d7275029df3286cf08fc02a1cdc4a0063a7385fa4 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.87K 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.87K 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.87K 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=5f4f9cb8d703e810fe03d57065be4b9b744158721f295112 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.nXJ 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 5f4f9cb8d703e810fe03d57065be4b9b744158721f295112 2 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 5f4f9cb8d703e810fe03d57065be4b9b744158721f295112 2 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=5f4f9cb8d703e810fe03d57065be4b9b744158721f295112 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.nXJ 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.nXJ 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.nXJ 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=12263656dbe1b9a6849db9f1f40da930 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:16:14.635 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.epC 00:16:14.636 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 12263656dbe1b9a6849db9f1f40da930 1 00:16:14.636 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 12263656dbe1b9a6849db9f1f40da930 1 00:16:14.636 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.636 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.636 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=12263656dbe1b9a6849db9f1f40da930 00:16:14.636 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:16:14.636 23:03:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:16:14.636 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.epC 00:16:14.636 23:03:26 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.epC 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # 
keys[2]=/tmp/spdk.key-sha256.epC 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=e879d44fb3bc6aadff01b383314c0cc6 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.5kN 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key e879d44fb3bc6aadff01b383314c0cc6 1 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 e879d44fb3bc6aadff01b383314c0cc6 1 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=e879d44fb3bc6aadff01b383314c0cc6 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:16:14.636 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.5kN 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.5kN 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.5kN 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=412730d1c4be3782dff6d8193d3a84aad0e526c1b0fa724a 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.ax3 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 412730d1c4be3782dff6d8193d3a84aad0e526c1b0fa724a 2 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 412730d1c4be3782dff6d8193d3a84aad0e526c1b0fa724a 2 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=412730d1c4be3782dff6d8193d3a84aad0e526c1b0fa724a 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@705 -- # python - 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.ax3 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.ax3 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.ax3 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=d75c69a66175fcc7f6eafe2fe173933c 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.MRY 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key d75c69a66175fcc7f6eafe2fe173933c 0 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 d75c69a66175fcc7f6eafe2fe173933c 0 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=d75c69a66175fcc7f6eafe2fe173933c 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.MRY 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.MRY 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.MRY 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=ee2f190f76cd9617f9b3567570cf0d9076a300c03641bf570ff9cfcfb6308f6d 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.tyN 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key ee2f190f76cd9617f9b3567570cf0d9076a300c03641bf570ff9cfcfb6308f6d 3 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 ee2f190f76cd9617f9b3567570cf0d9076a300c03641bf570ff9cfcfb6308f6d 3 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix 
key digest 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=ee2f190f76cd9617f9b3567570cf0d9076a300c03641bf570ff9cfcfb6308f6d 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.tyN 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.tyN 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.tyN 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 85169 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 85169 ']' 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:14.895 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.TMn 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.C1u ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.C1u 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.87K 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.nXJ ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nXJ 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 
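The gen_key calls traced above reduce to: pull N random bytes out of /dev/urandom with xxd, wrap the hex string into the DHHC-1:<hmac-id>:<base64>: text form, and chmod the temp file to 0600. A minimal sketch of that flow for the "gen_key sha512 64" case, assuming the secret is the ASCII hex string itself with a little-endian CRC-32 appended before base64 encoding (the real wrapping is done by format_dhchap_key/format_key in nvmf/common.sh, whose embedded Python body is not printed by this trace):

  # 32 random bytes -> 64-character hex secret, as in "gen_key sha512 64" above
  hmac_id=3                                  # 0=null, 1=sha256, 2=sha384, 3=sha512
  key_hex=$(xxd -p -c0 -l 32 /dev/urandom)
  key_file=$(mktemp -t spdk.key-sha512.XXX)
  # assumed DHHC-1 framing: base64(secret bytes + CRC-32 of the secret, little-endian)
  python3 -c 'import base64,sys,zlib; s=sys.argv[2].encode(); crc=zlib.crc32(s).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[1]), base64.b64encode(s+crc).decode()))' "$hmac_id" "$key_hex" > "$key_file"
  chmod 0600 "$key_file"

The resulting key files are what the keyring_file_add_key RPCs below load by path.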
00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.epC 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.5kN ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5kN 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ax3 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.MRY ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.MRY 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.tyN 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:15.466 23:03:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:15.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:15.756 Waiting for block devices as requested 00:16:15.756 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:16.063 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:16.629 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:16.629 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:16.629 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:16.629 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:16:16.629 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:16.629 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:16.629 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:16.629 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:16.629 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:16.629 No valid GPT data, bailing 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 
-- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:16.630 No valid GPT data, bailing 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:16.630 No valid GPT data, bailing 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:16.630 23:03:28 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:16.630 No valid GPT data, bailing 00:16:16.630 23:03:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 
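The device scan above (nvmf/common.sh@650-656) walks /sys/block/nvme*, skips zoned namespaces, and treats anything without a recognizable partition table ("No valid GPT data, bailing") as free to back the kernel target; the last free candidate wins, /dev/nvme1n1 here. A rough stand-alone equivalent, with the is_block_zoned/block_in_use helpers collapsed into plain sysfs and blkid checks (the real block_in_use also consults scripts/spdk-gpt.py):

  nvme=""
  for block in /sys/block/nvme*; do
      [[ -e $block ]] || continue
      dev=/dev/${block##*/}
      # zoned namespaces are skipped, matching is_block_zoned
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      # no partition-table type reported -> not in use, keep it as the candidate
      if ! blkid -s PTTYPE -o value "$dev" | grep -q .; then
          nvme=$dev
      fi
  done
  [[ -b $nvme ]] && echo "backing the kernel target with $nvme"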
00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -a 10.0.0.1 -t tcp -s 4420 00:16:16.890 00:16:16.890 Discovery Log Number of Records 2, Generation counter 2 00:16:16.890 =====Discovery Log Entry 0====== 00:16:16.890 trtype: tcp 00:16:16.890 adrfam: ipv4 00:16:16.890 subtype: current discovery subsystem 00:16:16.890 treq: not specified, sq flow control disable supported 00:16:16.890 portid: 1 00:16:16.890 trsvcid: 4420 00:16:16.890 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:16.890 traddr: 10.0.0.1 00:16:16.890 eflags: none 00:16:16.890 sectype: none 00:16:16.890 =====Discovery Log Entry 1====== 00:16:16.890 trtype: tcp 00:16:16.890 adrfam: ipv4 00:16:16.890 subtype: nvme subsystem 00:16:16.890 treq: not specified, sq flow control disable supported 00:16:16.890 portid: 1 00:16:16.890 trsvcid: 4420 00:16:16.890 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:16.890 traddr: 10.0.0.1 00:16:16.890 eflags: none 00:16:16.890 sectype: none 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:16.890 23:03:29 
nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.890 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.150 nvme0n1 
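The mkdir/echo/ln -s run above is configure_kernel_target plus nvmet_auth_init and the first nvmet_auth_set_key: it builds a kernel nvmet subsystem backed by /dev/nvme1n1, exposes it on TCP port 4420, restricts it to the allowed host NQN, and sets that host's DH-HMAC-CHAP parameters. A condensed sketch follows; because set -x does not print redirection targets, the configfs attribute names are taken from the stock Linux nvmet interface and should be read as assumptions, not as values lifted from this log:

  cfs=/sys/kernel/config/nvmet
  subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
  host=$cfs/hosts/nqn.2024-02.io.spdk:host0

  mkdir -p "$subsys/namespaces/1" "$cfs/ports/1" "$host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 0            > "$subsys/attr_allow_any_host"     # only allowed_hosts may connect
  echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
  echo tcp          > "$cfs/ports/1/addr_trtype"
  echo 4420         > "$cfs/ports/1/addr_trsvcid"
  echo ipv4         > "$cfs/ports/1/addr_adrfam"
  ln -s "$subsys" "$cfs/ports/1/subsystems/"
  ln -s "$host"   "$subsys/allowed_hosts/"

  # nvmet_auth_set_key: per-host DH-HMAC-CHAP digest, DH group and secrets
  echo 'hmac(sha256)'  > "$host/dhchap_hash"
  echo ffdhe2048       > "$host/dhchap_dhgroup"
  echo "DHHC-1:00:..." > "$host/dhchap_key"       # keys[1] from the trace, truncated here
  echo "DHHC-1:02:..." > "$host/dhchap_ctrl_key"  # ckeys[1], enables bidirectional auth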
00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.150 nvme0n1 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:17.150 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.151 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe2048 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.410 nvme0n1 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 
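From here the test is a triple loop over digests, DH groups and key indexes; each connect_authenticate pass boils down to three RPCs against the host-side SPDK application (rpc_cmd is a thin wrapper for scripts/rpc.py talking to /var/tmp/spdk.sock): pin the initiator to the digest/dhgroup pair under test, attach to the kernel target with the keyring names registered earlier, check that the controller actually appeared, and detach again. Roughly, for the sha256/ffdhe2048/key1 pass just traced:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # allow only the digest/DH group pair under test on the initiator side
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # connect with key1/ckey1, i.e. the keyring entries added via keyring_file_add_key
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # authentication succeeded iff the controller shows up; then clean up for the next pass
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $rpc bdev_nvme_detach_controller nvme0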
00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:16:17.410 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:17.411 23:03:29 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.411 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.671 nvme0n1 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.671 23:03:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.671 nvme0n1 00:16:17.671 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:17.930 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.931 nvme0n1 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.931 23:03:30 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:17.931 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:18.499 nvme0n1 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.499 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:18.758 nvme0n1 00:16:18.758 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.758 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.758 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.758 23:03:30 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:18.758 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:18.758 23:03:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.758 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.759 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:19.017 nvme0n1 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:19.017 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@729 -- # ip_candidates=() 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:19.018 nvme0n1 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:19.018 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:16:19.277 
23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:19.277 nvme0n1 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key 
sha256 ffdhe4096 0 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:19.277 23:03:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.845 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.104 23:03:32 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.104 nvme0n1 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.104 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
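
The trace above (and the near-identical passes that follow for ffdhe4096, ffdhe6144 and ffdhe8192) is the sha256 leg of host/auth.sh's digest/DH-group/key-index sweep. A condensed reconstruction of that loop, pieced together only from the xtrace lines shown in this log, is sketched below in bash. The configfs writes performed by nvmet_auth_set_key are an assumption (xtrace shows the echoed values but not their redirection targets), rpc_cmd is taken to be the usual SPDK test wrapper around scripts/rpc.py, and the keys, ckeys and dhgroups arrays plus the helper functions come from earlier in host/auth.sh, so this is an illustrative excerpt rather than a standalone script.

# Sketch of the sweep driving this part of the trace (sha256 pass), reconstructed
# from host/auth.sh@114-117, @42-51 and @68-78 as they appear in the log above.
# Anything not visible in the trace (nvmet configfs targets, rpc_cmd internals)
# is an assumption.
for dhgroup in "${dhgroups[@]}"; do        # ffdhe3072/4096/6144/8192 in this excerpt
    for keyid in "${!keys[@]}"; do         # key indexes 0..4
        # Target side: install the hmac(sha256)/$dhgroup key pair for this keyid.
        # Presumably written into the kernel nvmet configfs host entry (paths not shown).
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"

        # Initiator side: build the optional controller-key argument pair exactly as
        # host/auth.sh@71 does; the array stays empty when ckeys[keyid] is unset.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Success criterion visible in the log: the controller must enumerate as nvme0,
        # after which it is torn down before the next digest/dhgroup/keyid combination.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

The key0..key4 and ckey0..ckey3 names handed to --dhchap-key/--dhchap-ctrlr-key appear to refer to key entries registered earlier in the test for the DHHC-1: secrets echoed above (that setup is outside this excerpt), which is why an authentication failure at any combination would surface here as bdev_nvme_attach_controller failing or nvme0 missing from bdev_nvme_get_controllers at exactly that point in the trace.
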
00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.363 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.364 nvme0n1 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:20.364 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 
-- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.623 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.624 23:03:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.624 nvme0n1 00:16:20.624 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.624 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.624 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:20.624 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.624 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.883 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:21.142 nvme0n1 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.142 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:21.433 nvme0n1 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:21.433 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:21.434 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:21.434 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:16:21.434 23:03:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.335 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:23.594 nvme0n1 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.594 
23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:23.594 
23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.594 23:03:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:24.163 nvme0n1 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:24.163 23:03:36 
nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.163 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:24.422 nvme0n1 00:16:24.422 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.422 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.422 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.422 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:24.422 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:24.422 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=3 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:24.681 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.682 23:03:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:24.941 nvme0n1 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- 
host/auth.sh@77 -- # jq -r '.[].name' 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.941 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.199 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:25.458 nvme0n1 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:25.458 23:03:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.692 23:03:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:29.951 nvme0n1 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:29.951 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.952 23:03:42 nvmf_tcp.nvmf_auth -- 
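Each nvmet_auth_set_key call prepares the target side of the handshake for one key id: the digest is passed as an hmac(...) string, the DH group by name, and the secrets in the DHHC-1:NN:<base64>: representation used by NVMe DH-HMAC-CHAP (the two-digit field records whether the secret was transformed, 00 meaning an untransformed secret). The xtrace output records only the echoed values, not where they are written, so the sketch below mirrors the visible arguments and uses hypothetical stand-ins for the destinations.

  # Sketch of the target-side setup traced at host/auth.sh@42-51. $host_cfg and
  # the dhchap_* file names are hypothetical placeholders for the real targets,
  # which the xtrace output does not show.
  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[keyid]} ckey=${ckeys[keyid]}
      echo "hmac($digest)" > "$host_cfg/dhchap_hash"     # hypothetical path
      echo "$dhgroup"      > "$host_cfg/dhchap_dhgroup"  # hypothetical path
      echo "$key"          > "$host_cfg/dhchap_key"      # hypothetical path
      [[ -z $ckey ]] || echo "$ckey" > "$host_cfg/dhchap_ctrl_key"  # hypothetical path
  }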
common/autotest_common.sh@10 -- # set +x 00:16:30.885 nvme0n1 00:16:30.885 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.885 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:30.885 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.885 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.885 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:30.885 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.885 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.885 23:03:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.885 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.885 23:03:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # 
local ip 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.885 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:31.456 nvme0n1 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:31.456 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.457 23:03:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:32.022 nvme0n1 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:32.022 23:03:44 
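The repeated get_main_ns_ip block resolves the address used for every attach: an associative array maps the transport to the name of an environment variable (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP) and the resolved value, 10.0.0.1 in this run, is echoed back to the caller. A condensed sketch of that helper follows; it assumes the nvmf/common.sh conventions (TEST_TRANSPORT and the NVMF_* variables exported beforehand) and simplifies the error handling.

  # Condensed form of the logic traced at nvmf/common.sh@728-742.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
      [[ -z ${!ip:-} ]] && return 1          # indirect expansion to its value
      echo "${!ip}"                          # 10.0.0.1 in this run
  }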
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.022 23:03:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:32.280 23:03:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.280 
23:03:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:32.847 nvme0n1 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- 
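Here the sha256 pass finishes and the for markers at host/auth.sh@113-115 show the test advancing to sha384 with the smallest DH group. The whole matrix is driven by three nested loops that can be read straight out of the trace; digests, dhgroups, keys and ckeys are arrays populated earlier in auth.sh and are assumed here.

  # Loop structure visible at host/auth.sh@113-117: every digest is exercised
  # against every DH group and every configured key id.
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side
          done
      done
  done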
common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:32.847 nvme0n1 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:32.847 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- 
host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:33.106 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.107 nvme0n1 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.107 
23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth 
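The recurring [[ nvme0 == \n\v\m\e\0 ]] lines are not matching newline or vertical-tab escapes: when the right-hand side of == inside [[ ]] is quoted, bash's xtrace prints each character backslash-escaped to show that the pattern is literal. The check simply compares the controller name reported by bdev_nvme_get_controllers against the expected nvme0, roughly as sketched below.

  # What the escaped trace line corresponds to in host/auth.sh@77 (sketch):
  ctrlr_name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $ctrlr_name == "nvme0" ]]   # xtrace renders the quoted RHS as \n\v\m\e\0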
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.107 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.366 nvme0n1 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 3 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth 
-- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.366 nvme0n1 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.366 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.625 nvme0n1 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.625 23:03:45 
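Key id 4 is the one entry with no controller secret: ckey is empty, the [[ -z '' ]] guard at host/auth.sh@51 skips the controller-key setup, and the array expansion at @71 adds --dhchap-ctrlr-key only when a controller secret exists, so the attach above carries --dhchap-key key4 alone and the session exercises unidirectional authentication (the target authenticates the host, but not the reverse). The conditional expansion is the one visible in the trace; its use in the attach command is sketched here under the same assumptions as before.

  # Conditional controller-key argument as traced at host/auth.sh@71: expands
  # to nothing when ckeys[keyid] is empty (key id 4 here).
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"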
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.625 23:03:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.883 nvme0n1 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.883 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.142 nvme0n1 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.142 nvme0n1 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.142 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:34.400 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@729 -- # ip_candidates=() 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.401 nvme0n1 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:16:34.401 
23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.401 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.659 nvme0n1 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key 
sha384 ffdhe4096 0 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.659 23:03:46 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.659 23:03:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.921 nvme0n1 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:34.921 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
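
The trace repeats one flow per digest/dhgroup/keyid combination: load the DH-HMAC-CHAP key pair on the target side (nvmet_auth_set_key), restrict the host to the digest and DH group under test (bdev_nvme_set_options --dhchap-digests/--dhchap-dhgroups), attach with the matching host and controller key names (bdev_nvme_attach_controller --dhchap-key keyN --dhchap-ctrlr-key ckeyN), confirm via bdev_nvme_get_controllers that nvme0 came up, and detach it again. Below is a minimal standalone sketch of that loop, not the test script itself: it assumes scripts/rpc.py in place of the rpc_cmd helper, uses a hypothetical set_target_key stand-in for the nvmet configuration performed by nvmet_auth_set_key, and assumes the key0..key4 / ckey0..ckey3 names were registered earlier in the run.

  #!/usr/bin/env bash
  # Sketch of the per-key authentication loop visible in this trace.
  # Assumptions: rpc.py stands in for the rpc_cmd test helper;
  # set_target_key is a hypothetical placeholder for nvmet_auth_set_key;
  # key$keyid / ckey$keyid were registered earlier in the test run.
  rpc=scripts/rpc.py
  subnqn=nqn.2024-02.io.spdk:cnode0
  hostnqn=nqn.2024-02.io.spdk:host0

  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3 4; do
      set_target_key sha384 "$dhgroup" "$keyid"            # hypothetical helper
      "$rpc" bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
      # The controller key is optional; keyid 4 has none in this trace,
      # mirroring the ${ckeys[keyid]:+...} expansion at host/auth.sh@71.
      ckey_arg=()
      if [[ $keyid -ne 4 ]]; then
        ckey_arg=(--dhchap-ctrlr-key "ckey$keyid")
      fi
      "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" "${ckey_arg[@]}"
      # The attach only succeeds if DH-HMAC-CHAP completed, so the presence
      # of the controller is the pass/fail check, as at host/auth.sh@77.
      [[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
      "$rpc" bdev_nvme_detach_controller nvme0
    done
  done
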
00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.922 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:35.180 nvme0n1 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.180 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 
-- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.181 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:35.489 nvme0n1 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.489 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:35.748 nvme0n1 00:16:35.748 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.748 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.748 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:35.748 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.748 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:35.748 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.748 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.748 23:03:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.748 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.748 23:03:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.748 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.009 nvme0n1 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # 
echo 'hmac(sha384)' 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.009 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.267 nvme0n1 00:16:36.267 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.267 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:36.267 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.267 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:36.267 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.267 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.525 
23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:36.525 
23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.525 23:03:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.782 nvme0n1 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:36.782 23:03:49 
nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.782 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:37.347 nvme0n1 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=3 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.347 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:37.605 nvme0n1 00:16:37.605 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.605 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.605 23:03:49 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.605 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:37.605 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:37.605 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.605 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.605 23:03:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.605 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.605 23:03:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:37.862 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.863 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:38.122 nvme0n1 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.122 23:03:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:39.058 nvme0n1 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.058 23:03:51 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.625 nvme0n1 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # 
local ip 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.625 23:03:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:40.192 nvme0n1 00:16:40.192 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.192 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:40.192 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.192 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.192 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:40.192 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.451 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.452 23:03:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.019 nvme0n1 00:16:41.019 23:03:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.019 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:41.019 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.019 23:03:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.020 23:03:53 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:41.020 23:03:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.020 
23:03:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.960 nvme0n1 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.960 nvme0n1 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- 
host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:41.960 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:41.961 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:41.961 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.961 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.961 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:41.961 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.961 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:41.961 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:41.961 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:41.961 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.961 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.961 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.218 nvme0n1 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.218 
23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.218 nvme0n1 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:42.218 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:42.477 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 3 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth 
-- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.478 nvme0n1 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.478 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.797 nvme0n1 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.797 23:03:54 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.797 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.798 23:03:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.798 nvme0n1 00:16:42.798 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.798 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.798 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:42.798 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.798 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:42.798 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:43.071 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.072 nvme0n1 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.072 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.330 nvme0n1 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@729 -- # ip_candidates=() 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.330 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:43.331 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:43.331 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:43.331 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:43.331 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.331 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.331 nvme0n1 00:16:43.331 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.331 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.331 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.331 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.589 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:43.589 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.589 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.589 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.589 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.589 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.589 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.589 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:16:43.590 
23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.590 nvme0n1 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.590 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.849 23:03:55 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.849 23:03:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.849 nvme0n1 00:16:43.849 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.849 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.849 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:43.849 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.849 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:43.849 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.849 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.849 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.849 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.849 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
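For reference, every iteration traced in this run drives the same four host-side RPC calls. The lines below are a minimal sketch of one pass of the digest/dhgroup/keyid loop, using only values visible in the trace (sha512 / ffdhe3072, keyid 2); rpc_cmd is the autotest wrapper around SPDK's RPC client, and key2/ckey2 are the names of DH-HMAC-CHAP secrets set up earlier by the test, not the raw DHHC-1 strings (those are fed to the kernel nvmet side via nvmet_auth_set_key).

# One pass of the digest/dhgroup/keyid loop exercised above.
# Assumes the nvmet target already holds the matching DHHC-1 secrets.

# 1. Limit the host to the digest and DH group under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# 2. Connect with DH-HMAC-CHAP; --dhchap-ctrlr-key enables bidirectional
#    authentication and is omitted when the controller key is empty
#    (as the trace shows for keyid 4).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Authentication succeeded if the controller shows up by name.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4. Tear down before the next digest/dhgroup/keyid combination.
rpc_cmd bdev_nvme_detach_controller nvme0

The remainder of the log repeats this pattern for each DH group (ffdhe4096, ffdhe6144, ffdhe8192) and key index.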
00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.108 nvme0n1 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:44.108 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 
-- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.367 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.367 nvme0n1 00:16:44.368 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.368 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.368 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.368 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.368 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:44.368 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:44.626 23:03:56 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:44.627 23:03:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:44.627 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.627 23:03:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.886 nvme0n1 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.886 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.145 nvme0n1 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.145 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.146 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.405 nvme0n1 00:16:45.405 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.405 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.405 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:45.405 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.405 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.405 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.405 
23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.405 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.405 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.405 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:45.663 
23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.663 23:03:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.921 nvme0n1 00:16:45.921 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:45.922 23:03:58 
nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.922 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:46.489 nvme0n1 00:16:46.489 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=3 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.490 23:03:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:46.748 nvme0n1 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- 
host/auth.sh@77 -- # jq -r '.[].name' 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:46.748 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.749 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:47.315 nvme0n1 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk2ZWRiZTJiMThhMjI5ZjBjNzhmMDYzMWJjMzhhNDV+nq1g: 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: ]] 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:NWQzYTMzOTY2NjYxZjE2YjFkODZhOTFjYjRmMzI0NDM1NWVhZTA2N2NjZTk5MWJkNzFhZDdmNTQwMzVhYzFkYynqRd8=: 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:47.315 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.316 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:47.316 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:47.316 23:03:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:47.316 23:03:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.316 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.316 23:03:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:47.882 nvme0n1 00:16:47.882 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.882 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.882 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.882 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:47.882 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:47.882 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.882 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.882 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.882 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.882 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.140 23:04:00 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.706 nvme0n1 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:MTIyNjM2NTZkYmUxYjlhNjg0OWRiOWYxZjQwZGE5MzA0JrKl: 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: ]] 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:ZTg3OWQ0NGZiM2JjNmFhZGZmMDFiMzgzMzE0YzBjYzYgNPiX: 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.706 23:04:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # 
local ip 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.706 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:49.282 nvme0n1 00:16:49.282 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.282 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.282 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:49.282 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.282 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:49.282 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:NDEyNzMwZDFjNGJlMzc4MmRmZjZkODE5M2QzYTg0YWFkMGU1MjZjMWIwZmE3MjRhszY7FQ==: 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: ]] 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:ZDc1YzY5YTY2MTc1ZmNjN2Y2ZWFmZTJmZTE3MzkzM2M5SJMl: 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.541 23:04:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.107 nvme0n1 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.107 23:04:02 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZWUyZjE5MGY3NmNkOTYxN2Y5YjM1Njc1NzBjZjBkOTA3NmEzMDBjMDM2NDFiZjU3MGZmOWNmY2ZiNjMwOGY2ZDEMWQw=: 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:50.107 23:04:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.107 
23:04:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.675 nvme0n1 00:16:50.675 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.675 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.675 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.675 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:16:50.675 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.675 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ODVjZDJiMmQ3Mjc1MDI5ZGYzMjg2Y2YwOGZjMDJhMWNkYzRhMDA2M2E3Mzg1ZmE02ZQbaQ==: 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: ]] 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0ZjljYjhkNzAzZTgxMGZlMDNkNTcwNjViZTRiOWI3NDQxNTg3MjFmMjk1MTEyiOpt7A==: 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:50.934 
23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:50.934 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.935 2024/05/14 23:04:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:50.935 request: 00:16:50.935 { 00:16:50.935 "method": "bdev_nvme_attach_controller", 00:16:50.935 "params": { 00:16:50.935 "name": "nvme0", 00:16:50.935 "trtype": "tcp", 00:16:50.935 "traddr": "10.0.0.1", 00:16:50.935 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:50.935 "adrfam": "ipv4", 00:16:50.935 "trsvcid": "4420", 00:16:50.935 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:16:50.935 } 00:16:50.935 } 00:16:50.935 Got JSON-RPC error response 00:16:50.935 GoRPCClient: error on JSON-RPC call 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@130 
-- # get_main_ns_ip 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.935 2024/05/14 23:04:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:50.935 request: 00:16:50.935 { 00:16:50.935 "method": "bdev_nvme_attach_controller", 00:16:50.935 "params": { 00:16:50.935 "name": "nvme0", 00:16:50.935 "trtype": "tcp", 00:16:50.935 "traddr": "10.0.0.1", 00:16:50.935 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:50.935 "adrfam": "ipv4", 00:16:50.935 "trsvcid": "4420", 00:16:50.935 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:50.935 "dhchap_key": "key2" 00:16:50.935 } 00:16:50.935 } 00:16:50.935 Got JSON-RPC error response 00:16:50.935 GoRPCClient: error on JSON-RPC call 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ 
-n '' ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq length 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.935 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:51.194 2024/05/14 23:04:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:51.194 request: 00:16:51.194 { 00:16:51.194 "method": "bdev_nvme_attach_controller", 
00:16:51.194 "params": { 00:16:51.194 "name": "nvme0", 00:16:51.194 "trtype": "tcp", 00:16:51.194 "traddr": "10.0.0.1", 00:16:51.194 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:51.194 "adrfam": "ipv4", 00:16:51.194 "trsvcid": "4420", 00:16:51.194 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:51.194 "dhchap_key": "key1", 00:16:51.194 "dhchap_ctrlr_key": "ckey2" 00:16:51.194 } 00:16:51.194 } 00:16:51.194 Got JSON-RPC error response 00:16:51.194 GoRPCClient: error on JSON-RPC call 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.194 rmmod nvme_tcp 00:16:51.194 rmmod nvme_fabrics 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 85169 ']' 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 85169 00:16:51.194 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 85169 ']' 00:16:51.195 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 85169 00:16:51.195 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:16:51.195 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:51.195 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85169 00:16:51.195 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:51.195 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:51.195 killing process with pid 85169 00:16:51.195 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85169' 00:16:51.195 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 85169 00:16:51.195 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 85169 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:51.453 23:04:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:52.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:52.278 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:52.278 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:52.278 23:04:04 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.TMn /tmp/spdk.key-null.87K /tmp/spdk.key-sha256.epC /tmp/spdk.key-sha384.ax3 /tmp/spdk.key-sha512.tyN /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:16:52.278 23:04:04 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:52.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:52.537 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.537 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:52.537 00:16:52.537 real 0m39.794s 00:16:52.537 user 0m35.797s 00:16:52.537 sys 0m3.617s 00:16:52.537 23:04:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:52.537 ************************************ 00:16:52.537 23:04:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:16:52.537 END TEST nvmf_auth 00:16:52.537 ************************************ 00:16:52.796 23:04:04 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:16:52.796 23:04:04 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:52.796 23:04:04 nvmf_tcp -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:52.796 23:04:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:52.796 23:04:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:52.796 ************************************ 00:16:52.796 START TEST nvmf_digest 00:16:52.796 ************************************ 00:16:52.796 23:04:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:52.796 * Looking for test storage... 00:16:52.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.796 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:52.797 23:04:05 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:52.797 Cannot find device "nvmf_tgt_br" 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:52.797 Cannot find device "nvmf_tgt_br2" 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:52.797 Cannot find device "nvmf_tgt_br" 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:52.797 Cannot find device "nvmf_tgt_br2" 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:52.797 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:53.055 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:53.055 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.055 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:16:53.055 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:53.055 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:53.055 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:16:53.055 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:53.055 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:53.055 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:53.055 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:53.055 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:53.055 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:53.055 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:53.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:53.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:16:53.056 00:16:53.056 --- 10.0.0.2 ping statistics --- 00:16:53.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.056 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:53.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:53.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:53.056 00:16:53.056 --- 10.0.0.3 ping statistics --- 00:16:53.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.056 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:53.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:53.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:53.056 00:16:53.056 --- 10.0.0.1 ping statistics --- 00:16:53.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.056 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.056 23:04:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:53.315 ************************************ 00:16:53.315 START TEST nvmf_digest_clean 00:16:53.315 ************************************ 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.315 23:04:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=86825 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 86825 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 86825 ']' 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:53.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:53.315 23:04:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:53.315 [2024-05-14 23:04:05.530703] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:16:53.315 [2024-05-14 23:04:05.530823] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.315 [2024-05-14 23:04:05.669537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.574 [2024-05-14 23:04:05.744839] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.574 [2024-05-14 23:04:05.744894] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.574 [2024-05-14 23:04:05.744908] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.574 [2024-05-14 23:04:05.744918] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.574 [2024-05-14 23:04:05.744926] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
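By this point nvmf/common.sh has already built the test network that the target and bdevperf will talk across: a network namespace for the target, three veth pairs, a bridge tying the host-side ends together, and an iptables rule for the NVMe/TCP port. Condensed into plain commands (same interface names and 10.0.0.0/24 addressing as in the trace above, with the script's guard/retry wrappers omitted), the setup is roughly:

  # target namespace plus three veth pairs (one initiator-side, two target-side)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # the initiator keeps 10.0.0.1; the namespaced target ends get 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and enslave the host-side peers to a bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # let NVMe/TCP (port 4420) in and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings at 23:04:05 (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are just the sanity check that this topology forwards traffic in both directions before nvmf_tgt is started inside the namespace.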
00:16:53.574 [2024-05-14 23:04:05.744957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:54.508 null0 00:16:54.508 [2024-05-14 23:04:06.657690] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.508 [2024-05-14 23:04:06.681623] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:54.508 [2024-05-14 23:04:06.681854] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86875 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86875 /var/tmp/bperf.sock 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 86875 ']' 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:54.508 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:54.508 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:54.508 [2024-05-14 23:04:06.743781] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:16:54.508 [2024-05-14 23:04:06.743873] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86875 ] 00:16:54.508 [2024-05-14 23:04:06.881221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.766 [2024-05-14 23:04:06.941658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.766 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:54.766 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:16:54.766 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:54.766 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:54.766 23:04:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:55.024 23:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:55.024 23:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:55.283 nvme0n1 00:16:55.283 23:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:55.283 23:04:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:55.542 Running I/O for 2 seconds... 
00:16:57.445 00:16:57.446 Latency(us) 00:16:57.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.446 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:57.446 nvme0n1 : 2.00 17825.42 69.63 0.00 0.00 7171.41 3902.37 15966.95 00:16:57.446 =================================================================================================================== 00:16:57.446 Total : 17825.42 69.63 0.00 0.00 7171.41 3902.37 15966.95 00:16:57.446 0 00:16:57.446 23:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:57.446 23:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:57.446 23:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:57.446 23:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:57.446 23:04:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:57.446 | select(.opcode=="crc32c") 00:16:57.446 | "\(.module_name) \(.executed)"' 00:16:57.705 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:57.705 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:57.705 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:57.705 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:57.705 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86875 00:16:57.705 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 86875 ']' 00:16:57.705 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 86875 00:16:57.705 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:16:57.705 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:57.705 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86875 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:57.964 killing process with pid 86875 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86875' 00:16:57.964 Received shutdown signal, test time was about 2.000000 seconds 00:16:57.964 00:16:57.964 Latency(us) 00:16:57.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.964 =================================================================================================================== 00:16:57.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 86875 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 86875 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86946 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86946 /var/tmp/bperf.sock 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 86946 ']' 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:57.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:57.964 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:57.964 [2024-05-14 23:04:10.352709] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:16:57.964 [2024-05-14 23:04:10.352803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86946 ] 00:16:57.964 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:57.965 Zero copy mechanism will not be used. 
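The run_bperf iterations in this clean phase (randread and randwrite, 4096-byte I/O at queue depth 128 and 131072-byte I/O at queue depth 16) all follow the same RPC-driven sequence against a throwaway bdevperf instance. Stripped of the waitforlisten/killprocess guards it is roughly the following sketch, where rpc.py and bdevperf.py abbreviate the full /home/vagrant/spdk_repo/spdk paths printed in the trace:

  # start bdevperf detached; -z and --wait-for-rpc make it wait for RPCs instead of running immediately
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  rpc.py -s /var/tmp/bperf.sock framework_start_init
  # --ddgst enables the NVMe/TCP data digest on the initiator side of the connection
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # pass criterion: with scan_dsa=false, the crc32c work must have landed in the
  # software accel module and must have executed at least once
  rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  kill "$bperfpid"

The MiB/s column in each result table is simply IOPS times I/O size (17825.42 x 4096 B is about 69.63 MiB/s for the first run), and the 131072-byte runs print the extra notice because each I/O exceeds the TCP transport's 65536-byte zero-copy threshold, so buffers are copied instead.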
00:16:58.223 [2024-05-14 23:04:10.488513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.224 [2024-05-14 23:04:10.548783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.224 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:58.224 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:16:58.224 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:58.224 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:58.224 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:58.791 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.791 23:04:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:59.050 nvme0n1 00:16:59.050 23:04:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:59.050 23:04:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:59.050 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:59.050 Zero copy mechanism will not be used. 00:16:59.050 Running I/O for 2 seconds... 00:17:00.996 00:17:00.996 Latency(us) 00:17:00.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.996 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:00.996 nvme0n1 : 2.00 7440.24 930.03 0.00 0.00 2146.12 692.60 10485.76 00:17:00.996 =================================================================================================================== 00:17:00.996 Total : 7440.24 930.03 0.00 0.00 2146.12 692.60 10485.76 00:17:00.996 0 00:17:01.255 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:01.255 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:01.255 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:01.255 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:01.255 | select(.opcode=="crc32c") 00:17:01.255 | "\(.module_name) \(.executed)"' 00:17:01.255 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:01.514 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86946 00:17:01.515 23:04:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 86946 ']' 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 86946 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86946 00:17:01.515 killing process with pid 86946 00:17:01.515 Received shutdown signal, test time was about 2.000000 seconds 00:17:01.515 00:17:01.515 Latency(us) 00:17:01.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.515 =================================================================================================================== 00:17:01.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86946' 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 86946 00:17:01.515 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 86946 00:17:01.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87027 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87027 /var/tmp/bperf.sock 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 87027 ']' 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:01.774 23:04:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:01.774 [2024-05-14 23:04:13.955668] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:01.774 [2024-05-14 23:04:13.955927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87027 ] 00:17:01.774 [2024-05-14 23:04:14.091532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.774 [2024-05-14 23:04:14.151516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.712 23:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.712 23:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:17:02.712 23:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:02.712 23:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:02.712 23:04:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:02.970 23:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:02.971 23:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:03.228 nvme0n1 00:17:03.228 23:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:03.228 23:04:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:03.486 Running I/O for 2 seconds... 
00:17:05.386 00:17:05.386 Latency(us) 00:17:05.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.386 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.386 nvme0n1 : 2.01 21562.10 84.23 0.00 0.00 5926.24 2532.07 9234.62 00:17:05.386 =================================================================================================================== 00:17:05.386 Total : 21562.10 84.23 0.00 0.00 5926.24 2532.07 9234.62 00:17:05.386 0 00:17:05.386 23:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:05.386 23:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:05.386 23:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:05.386 23:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:05.386 23:04:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:05.386 | select(.opcode=="crc32c") 00:17:05.386 | "\(.module_name) \(.executed)"' 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87027 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 87027 ']' 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 87027 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87027 00:17:05.953 killing process with pid 87027 00:17:05.953 Received shutdown signal, test time was about 2.000000 seconds 00:17:05.953 00:17:05.953 Latency(us) 00:17:05.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.953 =================================================================================================================== 00:17:05.953 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87027' 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 87027 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 87027 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87113 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87113 /var/tmp/bperf.sock 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 87113 ']' 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:05.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:05.953 23:04:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:06.211 [2024-05-14 23:04:18.358078] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:06.211 [2024-05-14 23:04:18.358372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefixI/O size of 131072 is greater than zero copy threshold (65536). 00:17:06.211 Zero copy mechanism will not be used. 
00:17:06.211 =spdk_pid87113 ] 00:17:06.211 [2024-05-14 23:04:18.493405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.211 [2024-05-14 23:04:18.553570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.190 23:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:07.190 23:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:17:07.190 23:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:07.190 23:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:07.190 23:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:07.451 23:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:07.451 23:04:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:07.709 nvme0n1 00:17:07.709 23:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:07.709 23:04:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:07.971 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:07.971 Zero copy mechanism will not be used. 00:17:07.971 Running I/O for 2 seconds... 
00:17:09.873 00:17:09.873 Latency(us) 00:17:09.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.873 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:09.873 nvme0n1 : 2.00 5986.87 748.36 0.00 0.00 2665.99 2085.24 10545.34 00:17:09.873 =================================================================================================================== 00:17:09.873 Total : 5986.87 748.36 0.00 0.00 2665.99 2085.24 10545.34 00:17:09.873 0 00:17:09.873 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:09.873 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:09.873 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:09.873 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:09.873 | select(.opcode=="crc32c") 00:17:09.873 | "\(.module_name) \(.executed)"' 00:17:09.873 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87113 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 87113 ']' 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 87113 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87113 00:17:10.131 killing process with pid 87113 00:17:10.131 Received shutdown signal, test time was about 2.000000 seconds 00:17:10.131 00:17:10.131 Latency(us) 00:17:10.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.131 =================================================================================================================== 00:17:10.131 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87113' 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 87113 00:17:10.131 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 87113 00:17:10.389 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 86825 00:17:10.389 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 86825 ']' 00:17:10.389 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 86825 00:17:10.389 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:17:10.389 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:10.389 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86825 00:17:10.389 killing process with pid 86825 00:17:10.389 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:10.389 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:10.389 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86825' 00:17:10.389 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 86825 00:17:10.389 [2024-05-14 23:04:22.705012] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:10.389 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 86825 00:17:10.647 00:17:10.647 real 0m17.425s 00:17:10.647 user 0m33.633s 00:17:10.647 sys 0m4.183s 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:10.647 ************************************ 00:17:10.647 END TEST nvmf_digest_clean 00:17:10.647 ************************************ 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:10.647 ************************************ 00:17:10.647 START TEST nvmf_digest_error 00:17:10.647 ************************************ 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=87232 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 87232 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87232 ']' 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:10.647 23:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:10.647 [2024-05-14 23:04:22.999485] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:10.647 [2024-05-14 23:04:23.000037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.905 [2024-05-14 23:04:23.135709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.905 [2024-05-14 23:04:23.194168] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.905 [2024-05-14 23:04:23.194220] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.905 [2024-05-14 23:04:23.194231] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.905 [2024-05-14 23:04:23.194239] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.905 [2024-05-14 23:04:23.194246] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
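Where the clean phase only verified that digests were computed in software, the nvmf_digest_error test starting here deliberately produces bad CRC-32C values so the initiator's digest checking gets exercised. Condensed, the extra RPCs that show up below are roughly the following (the plain rpc_cmd calls address the target on its default /var/tmp/spdk.sock, while the -s /var/tmp/bperf.sock calls go to bdevperf; rpc.py and bdevperf.py again stand for the full spdk_repo paths):

  # target side: route the crc32c opcode through the accel error-injection module
  rpc.py accel_assign_opc -o crc32c -m error
  # bdevperf side: keep per-error statistics and retry failed commands indefinitely
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # injection starts disabled, the controller is attached with --ddgst as before,
  # then 256 crc32c operations are corrupted on purpose
  rpc.py accel_error_inject_error -o crc32c -t disable
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  bdevperf.py -s /var/tmp/bperf.sock perform_tests

The burst of nvme_tcp "data digest error" messages and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that fills the rest of this section is therefore the expected outcome of the injection, not a test failure; with --bdev-retry-count -1 the failed reads are simply retried.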
00:17:10.905 [2024-05-14 23:04:23.194277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.905 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:10.905 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:17:10.905 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.905 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.905 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:11.164 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.164 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:11.164 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.164 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:11.164 [2024-05-14 23:04:23.302681] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:11.164 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:11.165 null0 00:17:11.165 [2024-05-14 23:04:23.374201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.165 [2024-05-14 23:04:23.398160] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:11.165 [2024-05-14 23:04:23.398399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87259 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87259 /var/tmp/bperf.sock 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87259 ']' 00:17:11.165 23:04:23 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:11.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:11.165 23:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:11.165 [2024-05-14 23:04:23.447101] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:11.165 [2024-05-14 23:04:23.447175] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87259 ] 00:17:11.423 [2024-05-14 23:04:23.578538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.423 [2024-05-14 23:04:23.636951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.359 23:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:12.359 23:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:17:12.359 23:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:12.359 23:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:12.618 23:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:12.618 23:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.618 23:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:12.618 23:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.618 23:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:12.618 23:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:12.877 nvme0n1 00:17:12.877 23:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:12.877 23:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.877 23:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:12.877 23:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.877 23:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:12.877 23:04:25 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:12.877 Running I/O for 2 seconds... 00:17:12.877 [2024-05-14 23:04:25.253356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:12.877 [2024-05-14 23:04:25.253443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.877 [2024-05-14 23:04:25.253459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.877 [2024-05-14 23:04:25.267588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:12.877 [2024-05-14 23:04:25.267647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.877 [2024-05-14 23:04:25.267662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.282028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.282065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.282079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.297161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.297199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.297213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.312547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.312592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.312606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.325043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.325084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.325098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.340325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.340368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:53 nsid:1 lba:17977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.340382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.354737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.354801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.354816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.369119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.369157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.369171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.382702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.382755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.382769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.398575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.398613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.398627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.412709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.412788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.412804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.425462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.425528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.425544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.439789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.439860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.439884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.453893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.453937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.453951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.467788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.467825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.467839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.482891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.482929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.482942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.497370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.497408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.497422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.511874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.511911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.511926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.137 [2024-05-14 23:04:25.525306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.137 [2024-05-14 23:04:25.525349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.137 [2024-05-14 23:04:25.525362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.396 [2024-05-14 23:04:25.537729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 
[2024-05-14 23:04:25.537781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.537796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.552954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.552993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.553007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.564401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.564438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.564452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.580111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.580154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.580168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.592811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.592848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.592871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.608365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.608403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.608418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.624341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.624385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.624400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.636437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.636475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.636489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.652459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.652523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.652538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.666927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.666988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.667003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.680154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.680201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.680215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.694429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.694475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.694492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.711406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.711447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.711462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.723545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.723582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.723596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.737079] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.737116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.737130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.752951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.752989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.753003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.765754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.765803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.765816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.397 [2024-05-14 23:04:25.779191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.397 [2024-05-14 23:04:25.779241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.397 [2024-05-14 23:04:25.779255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.656 [2024-05-14 23:04:25.794232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.656 [2024-05-14 23:04:25.794275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.656 [2024-05-14 23:04:25.794289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.656 [2024-05-14 23:04:25.807807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.656 [2024-05-14 23:04:25.807847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.656 [2024-05-14 23:04:25.807861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.656 [2024-05-14 23:04:25.821782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.656 [2024-05-14 23:04:25.821820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.656 [2024-05-14 23:04:25.821833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:13.656 [2024-05-14 23:04:25.835605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.656 [2024-05-14 23:04:25.835642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.656 [2024-05-14 23:04:25.835656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.656 [2024-05-14 23:04:25.848801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.656 [2024-05-14 23:04:25.848838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.656 [2024-05-14 23:04:25.848852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.656 [2024-05-14 23:04:25.864600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.656 [2024-05-14 23:04:25.864663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.656 [2024-05-14 23:04:25.864678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:25.877824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:25.877861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:25.877875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:25.892570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:25.892607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:25.892621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:25.906833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:25.906870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:25.906884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:25.919718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:25.919754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:25.919780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:25.934639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:25.934696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:25.934711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:25.949062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:25.949098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:25.949112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:25.963637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:25.963681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:25.963694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:25.977339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:25.977391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:25.977404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:25.992452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:25.992492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:25.992505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:26.005207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:26.005243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:26.005257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:26.021479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:26.021531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:26.021545] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.657 [2024-05-14 23:04:26.036719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.657 [2024-05-14 23:04:26.036798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.657 [2024-05-14 23:04:26.036813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.052282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.052319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.052333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.067417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.067480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.067495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.082225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.082276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.082290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.095995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.096031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.096045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.109931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.109966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.109979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.125744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.125834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.125850] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.140384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.140422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.140436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.155320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.155356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.155369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.169482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.169537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.169550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.184896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.184932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.184946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.200271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.200322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.200335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.212936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.212976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.212989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.228934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.228998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:13.916 [2024-05-14 23:04:26.229013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.243986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.244043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.244058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.257633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.257684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.257698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.273819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.273854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.273867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.287781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.287829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.287850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.916 [2024-05-14 23:04:26.300291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:13.916 [2024-05-14 23:04:26.300344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.916 [2024-05-14 23:04:26.300358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.175 [2024-05-14 23:04:26.314663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.175 [2024-05-14 23:04:26.314747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.314761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.328529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.328567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:14631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.328580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.344750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.344816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.344831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.358606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.358645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.358659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.373699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.373751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.373764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.388593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.388647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.388660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.404435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.404487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.404500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.417954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.418006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.418020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.433363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.433399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.433412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.449419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.449473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.449494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.463945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.463998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.464011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.476472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.476518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.476533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.492256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.492326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.492357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.508257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.508309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.508323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.520197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.520250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.520264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.536528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 
[2024-05-14 23:04:26.536571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.536585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.550774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.550830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.550845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.176 [2024-05-14 23:04:26.565210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.176 [2024-05-14 23:04:26.565251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.176 [2024-05-14 23:04:26.565266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.579963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.435 [2024-05-14 23:04:26.580023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.435 [2024-05-14 23:04:26.580038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.592784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.435 [2024-05-14 23:04:26.592834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.435 [2024-05-14 23:04:26.592847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.608178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.435 [2024-05-14 23:04:26.608229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.435 [2024-05-14 23:04:26.608243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.623543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.435 [2024-05-14 23:04:26.623582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.435 [2024-05-14 23:04:26.623596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.637684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x92f9d0) 00:17:14.435 [2024-05-14 23:04:26.637720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.435 [2024-05-14 23:04:26.637734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.652423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.435 [2024-05-14 23:04:26.652459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.435 [2024-05-14 23:04:26.652472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.667231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.435 [2024-05-14 23:04:26.667319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.435 [2024-05-14 23:04:26.667334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.680514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.435 [2024-05-14 23:04:26.680550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.435 [2024-05-14 23:04:26.680563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.695463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.435 [2024-05-14 23:04:26.695505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.435 [2024-05-14 23:04:26.695518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.709483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.435 [2024-05-14 23:04:26.709526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.435 [2024-05-14 23:04:26.709540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.723897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.435 [2024-05-14 23:04:26.723946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.435 [2024-05-14 23:04:26.723960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.435 [2024-05-14 23:04:26.737307] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.436 [2024-05-14 23:04:26.737354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.436 [2024-05-14 23:04:26.737369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.436 [2024-05-14 23:04:26.752719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.436 [2024-05-14 23:04:26.752774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.436 [2024-05-14 23:04:26.752790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.436 [2024-05-14 23:04:26.767410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.436 [2024-05-14 23:04:26.767447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.436 [2024-05-14 23:04:26.767461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.436 [2024-05-14 23:04:26.781757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.436 [2024-05-14 23:04:26.781833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.436 [2024-05-14 23:04:26.781855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.436 [2024-05-14 23:04:26.796291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.436 [2024-05-14 23:04:26.796338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.436 [2024-05-14 23:04:26.796353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.436 [2024-05-14 23:04:26.809002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.436 [2024-05-14 23:04:26.809040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.436 [2024-05-14 23:04:26.809053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.436 [2024-05-14 23:04:26.824752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.436 [2024-05-14 23:04:26.824800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.436 [2024-05-14 23:04:26.824814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:14.695 [2024-05-14 23:04:26.839172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.839212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.839226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:26.852273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.852312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.852325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:26.866550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.866592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.866606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:26.880157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.880196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.880209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:26.894557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.894607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.894621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:26.908752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.908817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.908831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:26.921935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.921980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.921994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:26.936495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.936552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.936567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:26.951902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.951947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.951962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:26.967433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.967479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.967504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:26.980518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.980558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.980571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:26.994815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:26.994853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:26.994868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:27.009510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:27.009551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:27.009564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:27.023873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:27.023910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:27.023924] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:27.038907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:27.038944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:27.038958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:27.052703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:27.052739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:27.052752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:27.068778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:27.068814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:27.068828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.695 [2024-05-14 23:04:27.081946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.695 [2024-05-14 23:04:27.081983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.695 [2024-05-14 23:04:27.081997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 [2024-05-14 23:04:27.095387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.955 [2024-05-14 23:04:27.095431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.955 [2024-05-14 23:04:27.095445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 [2024-05-14 23:04:27.111146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.955 [2024-05-14 23:04:27.111195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.955 [2024-05-14 23:04:27.111208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 [2024-05-14 23:04:27.124995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.955 [2024-05-14 23:04:27.125031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.955 [2024-05-14 23:04:27.125045] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 [2024-05-14 23:04:27.140532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.955 [2024-05-14 23:04:27.140585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.955 [2024-05-14 23:04:27.140598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 [2024-05-14 23:04:27.153906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.955 [2024-05-14 23:04:27.153943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.955 [2024-05-14 23:04:27.153957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 [2024-05-14 23:04:27.168737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.955 [2024-05-14 23:04:27.168800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.955 [2024-05-14 23:04:27.168813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 [2024-05-14 23:04:27.183593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.955 [2024-05-14 23:04:27.183630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.955 [2024-05-14 23:04:27.183643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 [2024-05-14 23:04:27.197700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.955 [2024-05-14 23:04:27.197739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.955 [2024-05-14 23:04:27.197752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 [2024-05-14 23:04:27.211541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.955 [2024-05-14 23:04:27.211578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.955 [2024-05-14 23:04:27.211591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 [2024-05-14 23:04:27.224715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.955 [2024-05-14 23:04:27.224751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:14.955 [2024-05-14 23:04:27.224777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 [2024-05-14 23:04:27.239424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x92f9d0) 00:17:14.955 [2024-05-14 23:04:27.239467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.955 [2024-05-14 23:04:27.239481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.955 00:17:14.955 Latency(us) 00:17:14.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.955 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:14.955 nvme0n1 : 2.01 17674.85 69.04 0.00 0.00 7230.74 3693.85 18588.39 00:17:14.955 =================================================================================================================== 00:17:14.955 Total : 17674.85 69.04 0.00 0.00 7230.74 3693.85 18588.39 00:17:14.955 0 00:17:14.955 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:14.955 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:14.955 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:14.955 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:14.955 | .driver_specific 00:17:14.955 | .nvme_error 00:17:14.955 | .status_code 00:17:14.955 | .command_transient_transport_error' 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 )) 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87259 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87259 ']' 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87259 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87259 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:15.214 killing process with pid 87259 00:17:15.214 Received shutdown signal, test time was about 2.000000 seconds 00:17:15.214 00:17:15.214 Latency(us) 00:17:15.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.214 =================================================================================================================== 00:17:15.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87259' 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@965 -- # kill 87259 00:17:15.214 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87259 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87355 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87355 /var/tmp/bperf.sock 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87355 ']' 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:15.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:15.473 23:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:15.473 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:15.473 Zero copy mechanism will not be used. 00:17:15.473 [2024-05-14 23:04:27.793842] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:17:15.473 [2024-05-14 23:04:27.793939] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87355 ] 00:17:15.731 [2024-05-14 23:04:27.926976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.731 [2024-05-14 23:04:27.986610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.731 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:15.731 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:17:15.731 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:15.731 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:15.989 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:15.989 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.989 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:15.989 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.989 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:15.989 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:16.558 nvme0n1 00:17:16.558 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:16.558 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.558 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:16.558 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.558 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:16.558 23:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:16.558 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:16.558 Zero copy mechanism will not be used. 00:17:16.558 Running I/O for 2 seconds... 
00:17:16.558 [2024-05-14 23:04:28.813030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.813080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.813095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.818418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.818457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.818471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.823378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.823416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.823430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.827479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.827516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.827529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.830545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.830581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.830598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.834981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.835017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.835030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.839711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.839748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.839777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.845073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.845112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.845126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.848375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.848413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.848427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.852936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.852976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.852990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.856889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.856928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.856943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.860634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.860671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.860684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.865374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.865414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.865428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.869997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.870035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.870049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.873617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.873652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.873665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.878141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.878178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.878191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.883198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.883236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.883250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.887881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.887916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.887929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.891155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.891208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.891230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.897597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.897652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.897677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.904377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.904427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.904448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.911218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.911269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.911290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.917713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.917778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.558 [2024-05-14 23:04:28.917801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.558 [2024-05-14 23:04:28.924213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.558 [2024-05-14 23:04:28.924266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.559 [2024-05-14 23:04:28.924286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.559 [2024-05-14 23:04:28.931005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.559 [2024-05-14 23:04:28.931056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.559 [2024-05-14 23:04:28.931076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.559 [2024-05-14 23:04:28.937513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.559 [2024-05-14 23:04:28.937563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.559 [2024-05-14 23:04:28.937582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.559 [2024-05-14 23:04:28.941861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.559 [2024-05-14 23:04:28.941907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.559 [2024-05-14 23:04:28.941927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.559 [2024-05-14 23:04:28.947749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.559 [2024-05-14 23:04:28.947811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.559 
[2024-05-14 23:04:28.947833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.819 [2024-05-14 23:04:28.953887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.819 [2024-05-14 23:04:28.953926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.819 [2024-05-14 23:04:28.953940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.819 [2024-05-14 23:04:28.958780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.819 [2024-05-14 23:04:28.958819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.819 [2024-05-14 23:04:28.958833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.819 [2024-05-14 23:04:28.964094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.819 [2024-05-14 23:04:28.964133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.819 [2024-05-14 23:04:28.964146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.819 [2024-05-14 23:04:28.969434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.819 [2024-05-14 23:04:28.969472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:28.969486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:28.973137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:28.973172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:28.973186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:28.977379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:28.977416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:28.977429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:28.982406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:28.982444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2112 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:28.982457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:28.986717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:28.986752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:28.986781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:28.990439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:28.990473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:28.990487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:28.993581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:28.993617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:28.993630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:28.998190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:28.998227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:28.998241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.001888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.001923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.001936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.006111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.006149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.006163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.010064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.010106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.010120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.014189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.014227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.014241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.018391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.018427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.018440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.022039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.022076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.022089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.026077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.026115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.026129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.030607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.030643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.030657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.034839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.034884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.034897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.038721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.038757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.038785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.042514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.042557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.042570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.046874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.046910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.046924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.052150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.052189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.052202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.056783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.056816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.056830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.059503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.059536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.059549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.064540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.064578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.064591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.069291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 
[2024-05-14 23:04:29.069327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.069341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.072784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.072818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.072831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.077176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.077213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.077226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.082106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.820 [2024-05-14 23:04:29.082142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.820 [2024-05-14 23:04:29.082155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.820 [2024-05-14 23:04:29.086834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.086872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.086885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.090221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.090257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.090270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.094788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.094825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.094838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.101011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.101067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.101082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.105881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.105920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.105934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.109610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.109648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.109661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.114346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.114388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.114402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.119695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.119734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.119747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.125000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.125037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.125051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.128614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.128650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.128663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.133139] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.133181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.133195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.137678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.137715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.137729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.141261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.141297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.141310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.145489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.145525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.145538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.149566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.149601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.149615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.153615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.153650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.153663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.157645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.157683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.157696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:16.821 [2024-05-14 23:04:29.162191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.162227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.162241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.166333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.166368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.166381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.170756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.170804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.170817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.174821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.174856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.174869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.178161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.178197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.178210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.182636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.182673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.182686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.186407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.186443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.186456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.190928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.190964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.190978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.194583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.194618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.194631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.198876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.198912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.198925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.203379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.821 [2024-05-14 23:04:29.203416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.821 [2024-05-14 23:04:29.203429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.821 [2024-05-14 23:04:29.207131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:16.822 [2024-05-14 23:04:29.207167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.822 [2024-05-14 23:04:29.207180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.082 [2024-05-14 23:04:29.211776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.082 [2024-05-14 23:04:29.211811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.082 [2024-05-14 23:04:29.211824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.082 [2024-05-14 23:04:29.216544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.082 [2024-05-14 23:04:29.216580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.082 [2024-05-14 23:04:29.216594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.082 [2024-05-14 23:04:29.219670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.082 [2024-05-14 23:04:29.219705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.082 [2024-05-14 23:04:29.219718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.082 [2024-05-14 23:04:29.223886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.082 [2024-05-14 23:04:29.223924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.082 [2024-05-14 23:04:29.223938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.082 [2024-05-14 23:04:29.228674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.082 [2024-05-14 23:04:29.228711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.082 [2024-05-14 23:04:29.228724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.232114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.232150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.232164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.236568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.236604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.236617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.240108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.240144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.240157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.244235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.244271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.244284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.247840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.247876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.247889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.252272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.252310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.252324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.255650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.255685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.255698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.260491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.260528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.260542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.264685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.264720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.264734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.268158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.268194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.268207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.272628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.272665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 
[2024-05-14 23:04:29.272678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.277383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.277433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.277446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.280970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.281006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.281019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.284793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.284829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.284843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.288510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.288547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.288560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.293413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.293450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.293463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.297386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.297425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.297438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.300891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.300928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.300941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.305454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.305492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.305506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.309479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.309515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.309528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.313058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.313093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.313107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.317460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.317497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.317510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.321485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.321521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.321535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.325948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.325986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.325999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.329966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.330001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.330014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.333823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.333858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.333871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.337954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.337990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.083 [2024-05-14 23:04:29.338003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.083 [2024-05-14 23:04:29.341554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.083 [2024-05-14 23:04:29.341590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.341603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.345904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.345940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.345953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.350071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.350107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.350120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.353754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.353800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.353813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.357737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.357785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.357799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.362379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.362415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.362428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.367300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.367335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.367349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.370115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.370149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.370162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.375397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.375434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.375448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.378748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.378793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.378807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.383251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.383288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.383301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.387658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 
[2024-05-14 23:04:29.387694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.387707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.392089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.392130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.392144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.395813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.395850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.395863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.400592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.400634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.400648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.404629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.404666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.404679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.409231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.409283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.409297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.413473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.413510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.413524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.417370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.417406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.417419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.421361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.421398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.421411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.425072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.425108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.425121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.427980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.428015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.428028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.432437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.432474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.432487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.437642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.437678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.437692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.442776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.442810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.442823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.446043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.446077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.446091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.450185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.450221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.450234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.454559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.454601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.084 [2024-05-14 23:04:29.454614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.084 [2024-05-14 23:04:29.458654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.084 [2024-05-14 23:04:29.458689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.085 [2024-05-14 23:04:29.458702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.085 [2024-05-14 23:04:29.462600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.085 [2024-05-14 23:04:29.462635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.085 [2024-05-14 23:04:29.462649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.085 [2024-05-14 23:04:29.466196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.085 [2024-05-14 23:04:29.466231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.085 [2024-05-14 23:04:29.466244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.085 [2024-05-14 23:04:29.470577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.085 [2024-05-14 23:04:29.470613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.085 [2024-05-14 23:04:29.470625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.345 [2024-05-14 23:04:29.474901] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.474937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.345 [2024-05-14 23:04:29.474950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.345 [2024-05-14 23:04:29.479217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.479253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.345 [2024-05-14 23:04:29.479266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.345 [2024-05-14 23:04:29.482370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.482404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.345 [2024-05-14 23:04:29.482417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.345 [2024-05-14 23:04:29.487618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.487655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.345 [2024-05-14 23:04:29.487668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.345 [2024-05-14 23:04:29.491850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.491884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.345 [2024-05-14 23:04:29.491897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.345 [2024-05-14 23:04:29.495426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.495461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.345 [2024-05-14 23:04:29.495474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.345 [2024-05-14 23:04:29.498891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.498926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.345 [2024-05-14 23:04:29.498939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:17.345 [2024-05-14 23:04:29.502318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.502353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.345 [2024-05-14 23:04:29.502366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.345 [2024-05-14 23:04:29.507383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.507421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.345 [2024-05-14 23:04:29.507435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.345 [2024-05-14 23:04:29.510688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.510724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.345 [2024-05-14 23:04:29.510737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.345 [2024-05-14 23:04:29.515418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.515455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.345 [2024-05-14 23:04:29.515468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.345 [2024-05-14 23:04:29.520098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.345 [2024-05-14 23:04:29.520134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.520148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.523648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.523683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.523696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.528267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.528303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.528316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.532164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.532199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.532212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.535889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.535925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.535939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.540355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.540391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.540404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.544867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.544911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.544924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.549319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.549355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.549368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.552170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.552203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.552216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.557482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.557519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.557533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.562403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.562438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.562451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.566106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.566142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.566155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.570591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.570627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.570640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.575452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.575488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.575502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.579411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.579447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.579460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.582372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.582406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.582419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.586720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.586774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.586790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.591416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.591452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.591466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.596183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.596219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.596232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.599534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.599569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.599582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.603636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.603672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.603686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.608082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.608118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.608132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.612072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.612109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.612123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.616381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.616417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 
[2024-05-14 23:04:29.616430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.619682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.619716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.619729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.624556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.624591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.624604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.629603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.629640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.629653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.346 [2024-05-14 23:04:29.633837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.346 [2024-05-14 23:04:29.633872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.346 [2024-05-14 23:04:29.633884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.347 [2024-05-14 23:04:29.637576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.347 [2024-05-14 23:04:29.637610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.347 [2024-05-14 23:04:29.637624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.347 [2024-05-14 23:04:29.641843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.347 [2024-05-14 23:04:29.641878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.347 [2024-05-14 23:04:29.641891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.347 [2024-05-14 23:04:29.645408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.347 [2024-05-14 23:04:29.645446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22496 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:17.347 [2024-05-14 23:04:29.645459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.347 [2024-05-14 23:04:29.649873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.347 [2024-05-14 23:04:29.649909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.347 [2024-05-14 23:04:29.649923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.347 [2024-05-14 23:04:29.654346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.347 [2024-05-14 23:04:29.654382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.347 [2024-05-14 23:04:29.654395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.347 [2024-05-14 23:04:29.657649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.347 [2024-05-14 23:04:29.657685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.347 [2024-05-14 23:04:29.657698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.347 [2024-05-14 23:04:29.662653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.347 [2024-05-14 23:04:29.662689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.347 [2024-05-14 23:04:29.662703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:17.347 [2024-05-14 23:04:29.667262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.347 [2024-05-14 23:04:29.667313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.347 [2024-05-14 23:04:29.667327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:17.347 [2024-05-14 23:04:29.670837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.347 [2024-05-14 23:04:29.670871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.347 [2024-05-14 23:04:29.670884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:17.347 [2024-05-14 23:04:29.675655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:17.347 [2024-05-14 23:04:29.675691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:17.347 [2024-05-14 23:04:29.675704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:17.347 [2024-05-14 23:04:29.679757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0)
00:17:17.347 [2024-05-14 23:04:29.679802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:17.347 [2024-05-14 23:04:29.679826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence repeats continuously from 23:04:29.683 through 23:04:30.267 (elapsed time 00:17:17.347 to 00:17:18.133): a data digest error on tqpair=(0xb6c8b0), the affected READ on sqid:1 (nsid:1, len:32, with cid and lba varying per command), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1 with cdw0:0, p:0 m:0 dnr:0; only the timestamps and the cid, lba and sqhd values change between repetitions ...]
00:17:18.133 [2024-05-14 23:04:30.270831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0)
00:17:18.133 [2024-05-14 23:04:30.270866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.270879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.275540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.275577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.275590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.280594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.280637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.280652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.284964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.285002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.285016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.289227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.289264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.289279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.293550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.293587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.293600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.297047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.297082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.297096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.301242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.301279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.301293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.305307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.305344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.305357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.308976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.309017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.309031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.313559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.313596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.313610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.317721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.317758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.317785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.321432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.321468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.321483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.325930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.325966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.325979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.329909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.329945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.329959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.333603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.333639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.333651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.337545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.337582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.337595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.341841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.341877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.341890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.345842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.345877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.345890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.349328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.349364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.349377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.353409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.353445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.353459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.356948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 
[2024-05-14 23:04:30.356983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.133 [2024-05-14 23:04:30.356996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.133 [2024-05-14 23:04:30.360280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.133 [2024-05-14 23:04:30.360316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.360329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.364611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.364648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.364661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.368738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.368786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.368800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.373130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.373167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.373180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.377106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.377142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.377155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.381228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.381264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.381278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.385702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.385737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.385751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.389876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.389912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.389924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.393677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.393713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.393727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.397876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.397915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.397929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.402073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.402109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.402122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.405631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.405669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.405683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.410318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.410358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.410374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.414359] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.414398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.414412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.418790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.418844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.418865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.422711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.422749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.422778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.427554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.427592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.427606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.432239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.432277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.432290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.435297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.435340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.435358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.439460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.439497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.439511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:18.134 [2024-05-14 23:04:30.443958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.443994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.444007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.448128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.448163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.448176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.451404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.451442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.451455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.455813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.455849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.455862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.460018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.460054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.460068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.463598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.463634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.463647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.468241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.134 [2024-05-14 23:04:30.468277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.134 [2024-05-14 23:04:30.468291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.134 [2024-05-14 23:04:30.472515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.472552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.472565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.476372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.476408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.476421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.480324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.480362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.480375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.483874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.483941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.483954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.488297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.488348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.488361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.492347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.492384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.492397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.496599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.496635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.496649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.500076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.500111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.500124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.504652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.504689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.504702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.509392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.509428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.509442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.514607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.514643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.514657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.518125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.518192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.518205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.135 [2024-05-14 23:04:30.521979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.135 [2024-05-14 23:04:30.522015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.135 [2024-05-14 23:04:30.522029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.526396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.526432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.526446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.530986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.531023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.531037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.533829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.533867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.533880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.538379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.538416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.538429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.542196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.542233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.542246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.546748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.546798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.546813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.551512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.551548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.551562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.556257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.556294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 
[2024-05-14 23:04:30.556308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.561044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.561079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.561092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.564256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.564290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.564303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.568642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.568678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.568691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.573613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.573649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.573663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.578278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.395 [2024-05-14 23:04:30.578313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.395 [2024-05-14 23:04:30.578327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.395 [2024-05-14 23:04:30.583182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.583217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.583230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.586306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.586340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7360 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.586353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.590503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.590539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.590553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.594399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.594434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.594448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.598745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.598793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.598807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.602003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.602038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.602052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.607219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.607257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.607270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.612435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.612471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.612486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.616054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.616090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.616103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.620451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.620488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.620503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.624196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.624233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.624246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.628718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.628754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.628781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.632035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.632070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.632083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.636315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.636352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.636365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.640514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.640545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.640559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.644319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.644353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.644367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.648742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.648787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.648802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.652871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.652914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.652927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.657221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.657257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.657270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.661376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.661411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.661424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.665936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.665971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.665985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.669101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.669136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.669150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.673120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 
[2024-05-14 23:04:30.673156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.673169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.677838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.677874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.677887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.681817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.681853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.396 [2024-05-14 23:04:30.681867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.396 [2024-05-14 23:04:30.685370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.396 [2024-05-14 23:04:30.685408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.685422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.689851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.689888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.689901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.694518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.694554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.694569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.698266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.698300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.698313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.701701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.701736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.701750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.705687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.705722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.705737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.710185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.710220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.710234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.713811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.713848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.713862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.717875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.717913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.717927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.721876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.721914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.721928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.725226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.725261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.725275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.729935] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.729970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.729983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.734706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.734743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.734756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.739790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.739837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.739851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.742702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.742735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.742748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.747908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.747944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.747959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.753150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.753186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.753199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.756392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.756427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.756450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:18.397 [2024-05-14 23:04:30.760900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.760939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.760952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.765825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.765861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.765874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.770247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.770282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.770295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.774636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.774671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.774684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.777739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.777784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.777798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.397 [2024-05-14 23:04:30.782107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.397 [2024-05-14 23:04:30.782144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.397 [2024-05-14 23:04:30.782163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.656 [2024-05-14 23:04:30.786347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.656 [2024-05-14 23:04:30.786386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.656 [2024-05-14 23:04:30.786402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.656 [2024-05-14 23:04:30.789800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.656 [2024-05-14 23:04:30.789835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.656 [2024-05-14 23:04:30.789849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.656 [2024-05-14 23:04:30.794187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.656 [2024-05-14 23:04:30.794224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.656 [2024-05-14 23:04:30.794239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:18.656 [2024-05-14 23:04:30.797984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.656 [2024-05-14 23:04:30.798028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.656 [2024-05-14 23:04:30.798041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.656 [2024-05-14 23:04:30.801990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.656 [2024-05-14 23:04:30.802026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.656 [2024-05-14 23:04:30.802039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:18.656 [2024-05-14 23:04:30.805724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6c8b0) 00:17:18.656 [2024-05-14 23:04:30.805773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.656 [2024-05-14 23:04:30.805789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:18.656 00:17:18.656 Latency(us) 00:17:18.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.656 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:18.656 nvme0n1 : 2.00 7370.36 921.29 0.00 0.00 2166.31 670.25 7089.80 00:17:18.656 =================================================================================================================== 00:17:18.656 Total : 7370.36 921.29 0.00 0.00 2166.31 670.25 7089.80 00:17:18.656 0 00:17:18.656 23:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:18.656 23:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:18.656 23:04:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:18.656 23:04:30 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:18.656 | .driver_specific 00:17:18.656 | .nvme_error 00:17:18.656 | .status_code 00:17:18.656 | .command_transient_transport_error' 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 475 > 0 )) 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87355 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87355 ']' 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87355 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87355 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:18.915 killing process with pid 87355 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87355' 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87355 00:17:18.915 Received shutdown signal, test time was about 2.000000 seconds 00:17:18.915 00:17:18.915 Latency(us) 00:17:18.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.915 =================================================================================================================== 00:17:18.915 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.915 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87355 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87425 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87425 /var/tmp/bperf.sock 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87425 ']' 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:19.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
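The pass/fail decision traced above is just a jq lookup over bdevperf's iostat RPC: bdev_get_iostat reports the per-status-code NVMe error counters kept when --nvme-error-stat is enabled, and the test only requires the transient-transport-error count to be non-zero (475 in this run). A minimal stand-alone sketch of that check, reusing the rpc.py path, socket name and jq filter printed by digest.sh above; the JSON layout is assumed to match that filter, and the whole snippet is an illustration rather than the harness itself:

    #!/usr/bin/env bash
    # Hypothetical re-run of the digest.sh error-count check outside the harness.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as printed in the trace
    sock=/var/tmp/bperf.sock                          # bdevperf's RPC socket

    # Fetch I/O statistics for the attached bdev and pull out the counter of
    # "command transient transport error" completions (status 00/22).
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # digest.sh only asserts that at least one such error was observed.
    (( errcount > 0 )) && echo "transient transport errors: $errcount"
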
00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:19.173 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:19.173 [2024-05-14 23:04:31.384236] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:19.173 [2024-05-14 23:04:31.384329] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87425 ] 00:17:19.173 [2024-05-14 23:04:31.522171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.432 [2024-05-14 23:04:31.580916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.432 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:19.432 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:17:19.432 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:19.432 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:19.696 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:19.696 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.696 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:19.696 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.696 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:19.696 23:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:19.954 nvme0n1 00:17:19.954 23:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:19.954 23:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.954 23:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:19.954 23:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.954 23:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:19.954 23:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:20.212 Running I/O for 2 seconds... 
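Before the randwrite pass starts producing the data-digest errors that follow, the trace sets the run up in four steps: start bdevperf in wait mode, enable NVMe error accounting with unlimited bdev retries, attach the target with data digest (--ddgst) enabled, and arm crc32c corruption in the accel error injector. A hedged sketch of that sequence using the binaries, flags and addresses printed above; the injection RPC is sent to the target application's default socket, which is what the rpc_cmd wrapper appears to do here, and the sleep stands in for the harness's waitforlisten:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # 1. bdevperf in "wait for RPC-driven tests" mode (-z): 4 KiB randwrite, QD 128, 2 s.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &
    sleep 1   # the harness waits for $bperf_sock to appear; a sleep stands in here

    # 2. Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
    #    so transient transport errors are counted rather than surfaced as failures.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # 3. Attach the TCP target with data digest enabled, then tell the accel
    #    error injector to corrupt crc32c operations (flags copied from the trace).
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256   # default (target-side) RPC socket

    # 4. Kick off the 2-second workload; the digest failures then show up as the
    #    COMMAND TRANSIENT TRANSPORT ERROR completions logged below.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests
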
00:17:20.212 [2024-05-14 23:04:32.404934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f6458 00:17:20.212 [2024-05-14 23:04:32.406035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.406079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.418043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ef6a8 00:17:20.212 [2024-05-14 23:04:32.419284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.419321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.429337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e0ea0 00:17:20.212 [2024-05-14 23:04:32.430415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.430449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.443543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f5be8 00:17:20.212 [2024-05-14 23:04:32.445493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.445527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.452118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e4578 00:17:20.212 [2024-05-14 23:04:32.453075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.453105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.466520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ed4e8 00:17:20.212 [2024-05-14 23:04:32.468150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.468197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.477746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190de470 00:17:20.212 [2024-05-14 23:04:32.479166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.479218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.489515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fe2e8 00:17:20.212 [2024-05-14 23:04:32.490880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.490911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.501651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f3e60 00:17:20.212 [2024-05-14 23:04:32.502482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.502515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.513086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fa3a0 00:17:20.212 [2024-05-14 23:04:32.513815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.513843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.526747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f8a50 00:17:20.212 [2024-05-14 23:04:32.528257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.528292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.538011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f81e0 00:17:20.212 [2024-05-14 23:04:32.539343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.539379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.549299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eee38 00:17:20.212 [2024-05-14 23:04:32.550493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.550527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.562420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eee38 00:17:20.212 [2024-05-14 23:04:32.564092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.564124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.573700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e5220 00:17:20.212 [2024-05-14 23:04:32.575215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.575247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.585343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ef6a8 00:17:20.212 [2024-05-14 23:04:32.586866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.586899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:20.212 [2024-05-14 23:04:32.596565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f5be8 00:17:20.212 [2024-05-14 23:04:32.597856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.212 [2024-05-14 23:04:32.597888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:20.471 [2024-05-14 23:04:32.608339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eee38 00:17:20.471 [2024-05-14 23:04:32.609561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.471 [2024-05-14 23:04:32.609596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:20.471 [2024-05-14 23:04:32.623048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f2d80 00:17:20.471 [2024-05-14 23:04:32.624965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.471 [2024-05-14 23:04:32.625000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:20.471 [2024-05-14 23:04:32.631630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f96f8 00:17:20.471 [2024-05-14 23:04:32.632562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.471 [2024-05-14 23:04:32.632596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:20.471 [2024-05-14 23:04:32.643713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f3a28 00:17:20.471 [2024-05-14 23:04:32.644629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.471 [2024-05-14 23:04:32.644663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:20.471 [2024-05-14 23:04:32.655117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eee38 00:17:20.471 [2024-05-14 23:04:32.655912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.471 [2024-05-14 23:04:32.655942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:20.471 [2024-05-14 23:04:32.669106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e5658 00:17:20.471 [2024-05-14 23:04:32.670539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.471 [2024-05-14 23:04:32.670572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:20.471 [2024-05-14 23:04:32.681458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190de038 00:17:20.471 [2024-05-14 23:04:32.682864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.471 [2024-05-14 23:04:32.682894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:20.471 [2024-05-14 23:04:32.695007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e12d8 00:17:20.472 [2024-05-14 23:04:32.696914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.696945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.703409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e1710 00:17:20.472 [2024-05-14 23:04:32.704193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.704224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.717549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190de038 00:17:20.472 [2024-05-14 23:04:32.719130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.719161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.728622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ef270 00:17:20.472 [2024-05-14 23:04:32.730127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.730173] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.740377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f1430 00:17:20.472 [2024-05-14 23:04:32.741686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.741718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.752462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f9b30 00:17:20.472 [2024-05-14 23:04:32.753773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.753801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.763929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ea248 00:17:20.472 [2024-05-14 23:04:32.765346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.765377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.778505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f4f40 00:17:20.472 [2024-05-14 23:04:32.780319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.780352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.791088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190dece0 00:17:20.472 [2024-05-14 23:04:32.793031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.793064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.799679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fcdd0 00:17:20.472 [2024-05-14 23:04:32.800648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.800683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.814084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fb8b8 00:17:20.472 [2024-05-14 23:04:32.815720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.815753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.825229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190dfdc0 00:17:20.472 [2024-05-14 23:04:32.826611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.826644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.836835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f4b08 00:17:20.472 [2024-05-14 23:04:32.838186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.838217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.848038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190feb58 00:17:20.472 [2024-05-14 23:04:32.849163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.849196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:20.472 [2024-05-14 23:04:32.859818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e6fa8 00:17:20.472 [2024-05-14 23:04:32.860884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.472 [2024-05-14 23:04:32.860930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:32.874289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f20d8 00:17:20.731 [2024-05-14 23:04:32.876016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:32.876049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:32.882916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e0630 00:17:20.731 [2024-05-14 23:04:32.883663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:32.883694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:32.897300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fb048 00:17:20.731 [2024-05-14 23:04:32.898727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 
23:04:32.898777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:32.908453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e84c0 00:17:20.731 [2024-05-14 23:04:32.909740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:32.909789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:32.920298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f7100 00:17:20.731 [2024-05-14 23:04:32.921448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:32.921482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:32.932406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fb480 00:17:20.731 [2024-05-14 23:04:32.933089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:32.933119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:32.947383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190df118 00:17:20.731 [2024-05-14 23:04:32.949405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:32.949447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:32.956111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ebb98 00:17:20.731 [2024-05-14 23:04:32.957126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:32.957159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:32.970635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f1ca0 00:17:20.731 [2024-05-14 23:04:32.972190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:32.972238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:32.980386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ed920 00:17:20.731 [2024-05-14 23:04:32.981243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:20.731 [2024-05-14 23:04:32.981275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:32.995450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f1430 00:17:20.731 [2024-05-14 23:04:32.997296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:32.997358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:33.007310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f96f8 00:17:20.731 [2024-05-14 23:04:33.009140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:33.009171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:33.015886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e01f8 00:17:20.731 [2024-05-14 23:04:33.016727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:33.016753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:33.030471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190de038 00:17:20.731 [2024-05-14 23:04:33.031881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:33.031915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:33.041869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fc998 00:17:20.731 [2024-05-14 23:04:33.043066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:33.043115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:33.054410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eaef0 00:17:20.731 [2024-05-14 23:04:33.055516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.731 [2024-05-14 23:04:33.055553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:20.731 [2024-05-14 23:04:33.066488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fe720 00:17:20.731 [2024-05-14 23:04:33.067226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25154 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:20.732 [2024-05-14 23:04:33.067260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:20.732 [2024-05-14 23:04:33.078550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f35f0 00:17:20.732 [2024-05-14 23:04:33.079583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.732 [2024-05-14 23:04:33.079615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:20.732 [2024-05-14 23:04:33.089853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e5a90 00:17:20.732 [2024-05-14 23:04:33.090720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.732 [2024-05-14 23:04:33.090750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:20.732 [2024-05-14 23:04:33.104083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ff3c8 00:17:20.732 [2024-05-14 23:04:33.105790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.732 [2024-05-14 23:04:33.105822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:20.732 [2024-05-14 23:04:33.112579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f57b0 00:17:20.732 [2024-05-14 23:04:33.113329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.732 [2024-05-14 23:04:33.113352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:20.990 [2024-05-14 23:04:33.125062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e3d08 00:17:20.990 [2024-05-14 23:04:33.125960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.990 [2024-05-14 23:04:33.125994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:20.990 [2024-05-14 23:04:33.139355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f81e0 00:17:20.990 [2024-05-14 23:04:33.140924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.990 [2024-05-14 23:04:33.140956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:20.990 [2024-05-14 23:04:33.150487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ee190 00:17:20.990 [2024-05-14 23:04:33.151818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:3777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.990 [2024-05-14 23:04:33.151848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:20.990 [2024-05-14 23:04:33.162149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eaef0 00:17:20.990 [2024-05-14 23:04:33.163426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.990 [2024-05-14 23:04:33.163458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:20.990 [2024-05-14 23:04:33.174253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f5378 00:17:20.990 [2024-05-14 23:04:33.175523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.990 [2024-05-14 23:04:33.175555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:20.990 [2024-05-14 23:04:33.188276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ea680 00:17:20.990 [2024-05-14 23:04:33.190211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.990 [2024-05-14 23:04:33.190244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:20.990 [2024-05-14 23:04:33.196818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e73e0 00:17:20.990 [2024-05-14 23:04:33.197783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.990 [2024-05-14 23:04:33.197814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:20.990 [2024-05-14 23:04:33.208986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f92c0 00:17:20.991 [2024-05-14 23:04:33.209956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.209988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.220324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ee190 00:17:20.991 [2024-05-14 23:04:33.221176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.221208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.234275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fd640 00:17:20.991 [2024-05-14 23:04:33.235552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:24426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.235582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.245572] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fb8b8 00:17:20.991 [2024-05-14 23:04:33.246696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.246727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.259894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e5220 00:17:20.991 [2024-05-14 23:04:33.261859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.261892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.268423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e38d0 00:17:20.991 [2024-05-14 23:04:33.269431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.269472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.280586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ef270 00:17:20.991 [2024-05-14 23:04:33.281581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.281612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.294061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e23b8 00:17:20.991 [2024-05-14 23:04:33.295538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.295569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.305160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e49b0 00:17:20.991 [2024-05-14 23:04:33.306392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.306425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.316776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f7970 00:17:20.991 [2024-05-14 23:04:33.317981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.318012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.328851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190df118 00:17:20.991 [2024-05-14 23:04:33.330043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.330073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.340251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e23b8 00:17:20.991 [2024-05-14 23:04:33.341294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.341325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.354861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ee5c8 00:17:20.991 [2024-05-14 23:04:33.356687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.356719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.363343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e0ea0 00:17:20.991 [2024-05-14 23:04:33.364240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.364271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:20.991 [2024-05-14 23:04:33.375442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e5220 00:17:20.991 [2024-05-14 23:04:33.376322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:20.991 [2024-05-14 23:04:33.376353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:21.249 [2024-05-14 23:04:33.386811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f5378 00:17:21.249 [2024-05-14 23:04:33.387535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.249 [2024-05-14 23:04:33.387567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:21.249 [2024-05-14 23:04:33.401340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e23b8 00:17:21.249 [2024-05-14 
23:04:33.402880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.249 [2024-05-14 23:04:33.402910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:21.249 [2024-05-14 23:04:33.412402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f1430 00:17:21.249 [2024-05-14 23:04:33.413717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.249 [2024-05-14 23:04:33.413750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:21.249 [2024-05-14 23:04:33.423990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e5a90 00:17:21.249 [2024-05-14 23:04:33.425073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.249 [2024-05-14 23:04:33.425104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:21.249 [2024-05-14 23:04:33.435326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e9e10 00:17:21.249 [2024-05-14 23:04:33.436237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.249 [2024-05-14 23:04:33.436270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:21.249 [2024-05-14 23:04:33.447211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190de470 00:17:21.249 [2024-05-14 23:04:33.447801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.249 [2024-05-14 23:04:33.447828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:21.249 [2024-05-14 23:04:33.459590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f4f40 00:17:21.249 [2024-05-14 23:04:33.460362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.249 [2024-05-14 23:04:33.460394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:21.249 [2024-05-14 23:04:33.471014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fef90 00:17:21.249 [2024-05-14 23:04:33.471661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.249 [2024-05-14 23:04:33.471691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:21.249 [2024-05-14 23:04:33.484791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e7c50 
00:17:21.249 [2024-05-14 23:04:33.486218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.249 [2024-05-14 23:04:33.486250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:21.249 [2024-05-14 23:04:33.494314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f8a50 00:17:21.249 [2024-05-14 23:04:33.495077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.249 [2024-05-14 23:04:33.495109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:21.249 [2024-05-14 23:04:33.507758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f96f8 00:17:21.249 [2024-05-14 23:04:33.508716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.508757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:21.250 [2024-05-14 23:04:33.519897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f8a50 00:17:21.250 [2024-05-14 23:04:33.521166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.521201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:21.250 [2024-05-14 23:04:33.531184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f4f40 00:17:21.250 [2024-05-14 23:04:33.532276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.532309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:21.250 [2024-05-14 23:04:33.545385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f6458 00:17:21.250 [2024-05-14 23:04:33.547307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.547340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:21.250 [2024-05-14 23:04:33.553928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e3d08 00:17:21.250 [2024-05-14 23:04:33.554883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.554918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:21.250 [2024-05-14 23:04:33.568260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) 
with pdu=0x2000190e6738 00:17:21.250 [2024-05-14 23:04:33.569893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.569925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:21.250 [2024-05-14 23:04:33.579379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fa7d8 00:17:21.250 [2024-05-14 23:04:33.580780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.580818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:21.250 [2024-05-14 23:04:33.591018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fbcf0 00:17:21.250 [2024-05-14 23:04:33.592371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.592405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:21.250 [2024-05-14 23:04:33.603159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e5a90 00:17:21.250 [2024-05-14 23:04:33.604494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.604528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:21.250 [2024-05-14 23:04:33.616618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190df988 00:17:21.250 [2024-05-14 23:04:33.618474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.618506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:21.250 [2024-05-14 23:04:33.625175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190df988 00:17:21.250 [2024-05-14 23:04:33.626050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.626082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:21.250 [2024-05-14 23:04:33.639460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e9e10 00:17:21.250 [2024-05-14 23:04:33.641033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.250 [2024-05-14 23:04:33.641064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:21.508 [2024-05-14 23:04:33.651583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x128fe70) with pdu=0x2000190fb8b8 00:17:21.508 [2024-05-14 23:04:33.653133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.653164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.663066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ec408 00:17:21.509 [2024-05-14 23:04:33.664479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.664510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.674699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eea00 00:17:21.509 [2024-05-14 23:04:33.675584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.675616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.686233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190feb58 00:17:21.509 [2024-05-14 23:04:33.687043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.687073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.697967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f1868 00:17:21.509 [2024-05-14 23:04:33.698562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.698589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.711712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190dfdc0 00:17:21.509 [2024-05-14 23:04:33.713103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.713135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.724085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e73e0 00:17:21.509 [2024-05-14 23:04:33.725801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.725834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.732616] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e6300 00:17:21.509 [2024-05-14 23:04:33.733374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.733406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.744678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ea680 00:17:21.509 [2024-05-14 23:04:33.745442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.745474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.758821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f81e0 00:17:21.509 [2024-05-14 23:04:33.760048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.760081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.770233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e23b8 00:17:21.509 [2024-05-14 23:04:33.771309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.771340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.784598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e2c28 00:17:21.509 [2024-05-14 23:04:33.786686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.786716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.793499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e3498 00:17:21.509 [2024-05-14 23:04:33.794447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.794477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.808124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e9e10 00:17:21.509 [2024-05-14 23:04:33.809591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.809623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.819155] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ec408 00:17:21.509 [2024-05-14 23:04:33.820525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.820557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.831011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ea248 00:17:21.509 [2024-05-14 23:04:33.832312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.832344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.845373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e0ea0 00:17:21.509 [2024-05-14 23:04:33.847357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.847388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.853915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ef270 00:17:21.509 [2024-05-14 23:04:33.854716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.854748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.868949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e3498 00:17:21.509 [2024-05-14 23:04:33.870772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.870822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.877902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190efae0 00:17:21.509 [2024-05-14 23:04:33.878876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.878907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:21.509 [2024-05-14 23:04:33.892181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e1b48 00:17:21.509 [2024-05-14 23:04:33.893859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.509 [2024-05-14 23:04:33.893889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:21.768 
[2024-05-14 23:04:33.903350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fd640 00:17:21.768 [2024-05-14 23:04:33.904746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.768 [2024-05-14 23:04:33.904790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:21.768 [2024-05-14 23:04:33.914987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fdeb0 00:17:21.768 [2024-05-14 23:04:33.916326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.768 [2024-05-14 23:04:33.916358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:21.768 [2024-05-14 23:04:33.926114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ed920 00:17:21.768 [2024-05-14 23:04:33.927235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.768 [2024-05-14 23:04:33.927266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:21.768 [2024-05-14 23:04:33.937789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fdeb0 00:17:21.768 [2024-05-14 23:04:33.938682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.768 [2024-05-14 23:04:33.938715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:21.768 [2024-05-14 23:04:33.952060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eea00 00:17:21.768 [2024-05-14 23:04:33.953754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.768 [2024-05-14 23:04:33.953794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:21.768 [2024-05-14 23:04:33.960573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e73e0 00:17:21.768 [2024-05-14 23:04:33.961317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.768 [2024-05-14 23:04:33.961349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:21.768 [2024-05-14 23:04:33.974983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190feb58 00:17:21.768 [2024-05-14 23:04:33.976385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.768 [2024-05-14 23:04:33.976418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 
dnr:0 00:17:21.768 [2024-05-14 23:04:33.987068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eaab8 00:17:21.768 [2024-05-14 23:04:33.988459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.768 [2024-05-14 23:04:33.988489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:21.768 [2024-05-14 23:04:33.998395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ddc00 00:17:21.768 [2024-05-14 23:04:33.999646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.768 [2024-05-14 23:04:33.999678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:21.768 [2024-05-14 23:04:34.010601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f7100 00:17:21.769 [2024-05-14 23:04:34.011511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.011544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.022026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fb8b8 00:17:21.769 [2024-05-14 23:04:34.022818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.022849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.033275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e3498 00:17:21.769 [2024-05-14 23:04:34.033873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.033901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.046832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eea00 00:17:21.769 [2024-05-14 23:04:34.048230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.048262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.058144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eee38 00:17:21.769 [2024-05-14 23:04:34.059402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.059434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.069381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f1ca0 00:17:21.769 [2024-05-14 23:04:34.070474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.070506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.083627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e12d8 00:17:21.769 [2024-05-14 23:04:34.085564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.085595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.092134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ee190 00:17:21.769 [2024-05-14 23:04:34.093096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.093127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.106403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190feb58 00:17:21.769 [2024-05-14 23:04:34.108036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.108066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.117613] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f6458 00:17:21.769 [2024-05-14 23:04:34.119028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.119059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.129247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190de470 00:17:21.769 [2024-05-14 23:04:34.130573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.130604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.140393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ed0b0 00:17:21.769 [2024-05-14 23:04:34.141529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.141559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:21.769 [2024-05-14 23:04:34.152021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e3060 00:17:21.769 [2024-05-14 23:04:34.153072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:21.769 [2024-05-14 23:04:34.153102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.166310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eee38 00:17:22.028 [2024-05-14 23:04:34.168020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.168051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.174804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f6458 00:17:22.028 [2024-05-14 23:04:34.175547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.175579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.186864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f5be8 00:17:22.028 [2024-05-14 23:04:34.187611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.187641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.200820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190eb760 00:17:22.028 [2024-05-14 23:04:34.202061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.202092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.213927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ea680 00:17:22.028 [2024-05-14 23:04:34.215634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.215667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.227489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e95a0 00:17:22.028 [2024-05-14 23:04:34.229373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.229406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.239985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ddc00 00:17:22.028 [2024-05-14 23:04:34.241733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.241777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.251432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e6b70 00:17:22.028 [2024-05-14 23:04:34.253021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.253054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.260238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190df550 00:17:22.028 [2024-05-14 23:04:34.261006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.261038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.274618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f57b0 00:17:22.028 [2024-05-14 23:04:34.276039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.276070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.285739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f3e60 00:17:22.028 [2024-05-14 23:04:34.286924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.286956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.297403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f1868 00:17:22.028 [2024-05-14 23:04:34.298509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.298544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.311694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e23b8 00:17:22.028 [2024-05-14 23:04:34.313513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.313545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.324143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e6300 00:17:22.028 [2024-05-14 23:04:34.326096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.326133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.332677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190efae0 00:17:22.028 [2024-05-14 23:04:34.333672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.333702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.347045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f46d0 00:17:22.028 [2024-05-14 23:04:34.348693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.348724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.358210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190fc998 00:17:22.028 [2024-05-14 23:04:34.359595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.359627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.369863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190ee5c8 00:17:22.028 [2024-05-14 23:04:34.371058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.371089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:22.028 [2024-05-14 23:04:34.381165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190e84c0 00:17:22.028 [2024-05-14 23:04:34.382183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.028 [2024-05-14 23:04:34.382213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:22.029 [2024-05-14 23:04:34.392434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x128fe70) with pdu=0x2000190f8a50 00:17:22.029 [2024-05-14 23:04:34.393332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:22.029 [2024-05-14 
23:04:34.393364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:22.029 00:17:22.029 Latency(us) 00:17:22.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.029 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.029 nvme0n1 : 2.00 21250.59 83.01 0.00 0.00 6013.70 2517.18 16324.42 00:17:22.029 =================================================================================================================== 00:17:22.029 Total : 21250.59 83.01 0.00 0.00 6013.70 2517.18 16324.42 00:17:22.029 0 00:17:22.029 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:22.029 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:22.029 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:22.029 | .driver_specific 00:17:22.029 | .nvme_error 00:17:22.029 | .status_code 00:17:22.029 | .command_transient_transport_error' 00:17:22.029 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 )) 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87425 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87425 ']' 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87425 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87425 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:22.596 killing process with pid 87425 00:17:22.596 Received shutdown signal, test time was about 2.000000 seconds 00:17:22.596 00:17:22.596 Latency(us) 00:17:22.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.596 =================================================================================================================== 00:17:22.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87425' 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87425 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87425 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # bs=131072 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87506 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87506 /var/tmp/bperf.sock 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 87506 ']' 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:22.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:22.596 23:04:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:22.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:22.854 Zero copy mechanism will not be used. 00:17:22.854 [2024-05-14 23:04:34.998740] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:22.854 [2024-05-14 23:04:34.998863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87506 ] 00:17:22.854 [2024-05-14 23:04:35.138282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.854 [2024-05-14 23:04:35.208743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.112 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:23.112 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:17:23.112 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:23.112 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:23.370 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:23.370 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.370 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:23.370 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.370 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:23.370 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:23.627 nvme0n1 00:17:23.627 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:23.627 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.627 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:23.627 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.627 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:23.627 23:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:23.885 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:23.885 Zero copy mechanism will not be used. 00:17:23.885 Running I/O for 2 seconds... 00:17:23.885 [2024-05-14 23:04:36.068351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.885 [2024-05-14 23:04:36.068678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.885 [2024-05-14 23:04:36.068710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:23.885 [2024-05-14 23:04:36.073669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.885 [2024-05-14 23:04:36.073992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.885 [2024-05-14 23:04:36.074017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:23.885 [2024-05-14 23:04:36.078910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.885 [2024-05-14 23:04:36.079202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.885 [2024-05-14 23:04:36.079232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:23.885 [2024-05-14 23:04:36.084142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.885 [2024-05-14 23:04:36.084435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.885 [2024-05-14 23:04:36.084464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.089557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.089866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 
23:04:36.089891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.094858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.095152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.095180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.100007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.100297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.100326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.105188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.105479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.105516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.110350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.110641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.110671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.115539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.115847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.115876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.120717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.121029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.121058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.125999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.126335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.126375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.131312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.131632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.131661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.136927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.137247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.137276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.142384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.142676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.142706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.147632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.147940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.147968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.152943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.153234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.153262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.158210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.158524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.158553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.163523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.163851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.163879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.168827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.169137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.169165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.174197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.174532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.174561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.179405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.179694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.179723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.184742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.185074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.185103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.190310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.190649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.190679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.195724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.196032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.196061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.200928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.201218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.201249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.206106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.206395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.206426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.211270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.211561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.211590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.216473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.216790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.216821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.221765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.222075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.222104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.226961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.227253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.227281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:23.886 [2024-05-14 23:04:36.232226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.886 [2024-05-14 23:04:36.232515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.886 [2024-05-14 23:04:36.232544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:23.887 [2024-05-14 23:04:36.237414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.887 
[2024-05-14 23:04:36.237703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.887 [2024-05-14 23:04:36.237731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:23.887 [2024-05-14 23:04:36.242656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.887 [2024-05-14 23:04:36.242957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.887 [2024-05-14 23:04:36.242985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:23.887 [2024-05-14 23:04:36.247887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.887 [2024-05-14 23:04:36.248176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.887 [2024-05-14 23:04:36.248204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:23.887 [2024-05-14 23:04:36.253080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.887 [2024-05-14 23:04:36.253376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.887 [2024-05-14 23:04:36.253405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:23.887 [2024-05-14 23:04:36.258291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.887 [2024-05-14 23:04:36.258581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.887 [2024-05-14 23:04:36.258610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:23.887 [2024-05-14 23:04:36.263454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.887 [2024-05-14 23:04:36.263743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.887 [2024-05-14 23:04:36.263782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:23.887 [2024-05-14 23:04:36.268675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.887 [2024-05-14 23:04:36.268993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.887 [2024-05-14 23:04:36.269021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:23.887 [2024-05-14 23:04:36.273920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:23.887 [2024-05-14 23:04:36.274226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:23.887 [2024-05-14 23:04:36.274254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.147 [2024-05-14 23:04:36.279328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.147 [2024-05-14 23:04:36.279625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.147 [2024-05-14 23:04:36.279656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.147 [2024-05-14 23:04:36.284648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.147 [2024-05-14 23:04:36.284978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.147 [2024-05-14 23:04:36.285009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.147 [2024-05-14 23:04:36.289867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.147 [2024-05-14 23:04:36.290159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.147 [2024-05-14 23:04:36.290189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.147 [2024-05-14 23:04:36.295214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.147 [2024-05-14 23:04:36.295544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.147 [2024-05-14 23:04:36.295575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.147 [2024-05-14 23:04:36.300482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.147 [2024-05-14 23:04:36.300788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.147 [2024-05-14 23:04:36.300817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.147 [2024-05-14 23:04:36.305923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.147 [2024-05-14 23:04:36.306214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.147 [2024-05-14 23:04:36.306243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.147 [2024-05-14 23:04:36.311242] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.311553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.311582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.316467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.316800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.316832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.321734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.322042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.322072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.327050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.327366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.327403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.332319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.332626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.332654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.337506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.337815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.337846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.342718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.343024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.343054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
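The xtrace lines at the top of this pass capture the whole nvmf_digest_error setup: host/digest.sh attaches an NVMe/TCP controller through the bdevperf RPC socket with --ddgst so the TCP data digest (CRC-32C) is enabled, injects a crc32c "corrupt" error via accel_error_inject_error, and then releases the queued bdevperf job with perform_tests. A minimal sketch of that sequence, assuming a bdevperf instance is already serving /var/tmp/bperf.sock (set up earlier by digest.sh); the socket for the error-injection call is left out because the log's rpc_cmd wrapper does not show it in this excerpt:

  # attach the NVMe/TCP controller with data digest enabled (arguments copied from the log above)
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # inject a 'corrupt' error into crc32c accel operations (flags copied verbatim; which RPC socket rpc_cmd targets is not visible here)
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # kick off the queued bdevperf run ("Running I/O for 2 seconds..." above)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted WRITE then shows up as the record pair repeated throughout this output: data_crc32_calc_done() reports a data digest error on the qpair, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22).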
00:17:24.148 [2024-05-14 23:04:36.347919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.348209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.348238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.353104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.353397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.353427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.358300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.358598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.358630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.363519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.363835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.363864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.368789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.369094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.369122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.374034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.374339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.374367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.379222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.379530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.379562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.384439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.384746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.384787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.389718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.390022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.390051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.394939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.395228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.395256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.400129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.400419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.400450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.405407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.405696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.405719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.410544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.410848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.410877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.415687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.415991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.416020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.420824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.421130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.421161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.426047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.426337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.426366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.431228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.431540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.431568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.436375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.148 [2024-05-14 23:04:36.436671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.148 [2024-05-14 23:04:36.436702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.148 [2024-05-14 23:04:36.441598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.441903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.441933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.446970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.447266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.447294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.452612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.452914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.452941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.457657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.457948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.457972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.462615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.462901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.462930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.467605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.467892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.467924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.472519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.472804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.472832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.477492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.477779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.477808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.482475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.482747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.482793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.487433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.487712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 
[2024-05-14 23:04:36.487743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.492402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.492672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.492700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.497409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.497683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.497712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.502399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.502686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.502715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.507445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.507726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.507754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.512395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.512669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.512692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.517399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.517671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.517695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.522319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.522590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.522624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.527048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.527374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.527417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.531845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.532117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.532163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.149 [2024-05-14 23:04:36.536734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.149 [2024-05-14 23:04:36.537027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.149 [2024-05-14 23:04:36.537057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.409 [2024-05-14 23:04:36.541610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.409 [2024-05-14 23:04:36.541878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.409 [2024-05-14 23:04:36.541907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.409 [2024-05-14 23:04:36.546463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.409 [2024-05-14 23:04:36.546718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.409 [2024-05-14 23:04:36.546746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.409 [2024-05-14 23:04:36.551156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.409 [2024-05-14 23:04:36.551398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.409 [2024-05-14 23:04:36.551427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.409 [2024-05-14 23:04:36.555839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.409 [2024-05-14 23:04:36.556102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.409 [2024-05-14 23:04:36.556130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.409 [2024-05-14 23:04:36.560552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.409 [2024-05-14 23:04:36.560826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.409 [2024-05-14 23:04:36.560854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.409 [2024-05-14 23:04:36.565272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.409 [2024-05-14 23:04:36.565515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.409 [2024-05-14 23:04:36.565543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.409 [2024-05-14 23:04:36.569908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.409 [2024-05-14 23:04:36.570172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.570202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.574578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.574837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.574861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.579202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.579463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.579486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.583864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.584107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.584135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.588548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.588803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.588831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.593156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.593442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.593471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.597865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.598107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.598135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.602522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.602799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.602840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.607323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.607598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.607627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.612287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.612551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.612580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.617254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.617519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.617549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.622010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 
[2024-05-14 23:04:36.622262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.622289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.626813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.627069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.627097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.631525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.631786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.631844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.636211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.636502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.636530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.640928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.641191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.641217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.645650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.645935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.645966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.650432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.650719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.650748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.655175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.655435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.655464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.659983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.660244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.660277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.664690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.664966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.664996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.669516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.669781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.669823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.674288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.674526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.674555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.678955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.679200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.679229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.683595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.683878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.683907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.688353] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.688595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.688618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.692989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.693232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.693261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.697662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.410 [2024-05-14 23:04:36.697939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.410 [2024-05-14 23:04:36.697964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.410 [2024-05-14 23:04:36.702338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.702580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.702602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.707128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.707370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.707399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.712079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.712338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.712366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.716858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.717129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.717157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
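The records in this run are all the same shape and differ only in LBA and timestamp, so when reading a console log like this one it is usually enough to tally them rather than scan each entry. A rough way to do that, assuming the console output has been saved to a file (build.log is only an illustrative name):

  # -o prints each match on its own line, so wc -l counts occurrences even when several records share a console line
  grep -o 'Data digest error on tqpair' build.log | wc -l
  grep -o 'TRANSIENT TRANSPORT ERROR (00/22)' build.log | wc -l

In this excerpt the two counts move together: each corrupted WRITE produces one data_crc32_calc_done error line and one spdk_nvme_print_completion notice carrying that status.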
00:17:24.411 [2024-05-14 23:04:36.721555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.721818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.721847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.726282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.726523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.726556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.730985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.731229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.731258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.735732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.736016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.736047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.740443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.740690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.740719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.745127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.745370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.745398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.749741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.749997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.750026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.754386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.754625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.754653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.759070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.759312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.759340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.763679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.763943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.763971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.768377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.768619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.768653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.773059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.773298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.773326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.777692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.777948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.777977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.782326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.782566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.782594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.786966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.787209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.787238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.791589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.791868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.791892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.411 [2024-05-14 23:04:36.796236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.411 [2024-05-14 23:04:36.796487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.411 [2024-05-14 23:04:36.796516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.801189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.801435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 [2024-05-14 23:04:36.801458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.806219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.806490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 [2024-05-14 23:04:36.806519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.811146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.811409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 [2024-05-14 23:04:36.811438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.816034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.816295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 [2024-05-14 23:04:36.816324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.821048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.821301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 [2024-05-14 23:04:36.821331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.825718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.825997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 [2024-05-14 23:04:36.826024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.830464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.830707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 [2024-05-14 23:04:36.830735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.835112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.835354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 [2024-05-14 23:04:36.835383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.839754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.840009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 [2024-05-14 23:04:36.840037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.844443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.844685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 [2024-05-14 23:04:36.844714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.849187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.849427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 
[2024-05-14 23:04:36.849455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.853747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.854029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.671 [2024-05-14 23:04:36.854058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.671 [2024-05-14 23:04:36.858369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.671 [2024-05-14 23:04:36.858613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.858643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.863026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.863284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.863313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.867740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.867994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.868024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.872311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.872553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.872582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.876954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.877215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.877245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.881606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.881867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.881897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.886258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.886497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.886526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.890918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.891162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.891191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.895592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.895851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.895879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.900256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.900494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.900523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.904863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.905137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.905167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.909455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.909699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.909728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.914065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.914326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.914355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.918821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.919066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.919094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.923511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.923755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.923794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.928102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.928359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.928388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.932722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.932992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.933021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.937364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.937608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.937637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.941971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.942235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.942263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.946651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.946912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.946941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.951401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.951662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.951690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.956091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.956330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.956358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.960734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.961020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.961048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.965506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.965791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.965820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.970497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.970746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.672 [2024-05-14 23:04:36.970792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.672 [2024-05-14 23:04:36.975244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.672 [2024-05-14 23:04:36.975486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:36.975516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:36.980190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 
[2024-05-14 23:04:36.980460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:36.980492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:36.985101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:36.985331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:36.985362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:36.989712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:36.989969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:36.989999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:36.994306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:36.994541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:36.994570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:36.998978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:36.999212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:36.999239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.003570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.003821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.003850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.008172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.008397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.008431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.012830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.013071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.013103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.017399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.017618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.017647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.022053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.022267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.022297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.026605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.026833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.026861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.031169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.031382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.031412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.035721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.035951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.035979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.040412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.040626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.040668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.045059] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.045287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.045316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.049656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.049883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.049911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.054263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.054476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.054505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.673 [2024-05-14 23:04:37.058917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.673 [2024-05-14 23:04:37.059131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.673 [2024-05-14 23:04:37.059172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.932 [2024-05-14 23:04:37.063853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.932 [2024-05-14 23:04:37.064087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.064115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.068708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.068960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.068989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.073425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.073690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.073734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:17:24.933 [2024-05-14 23:04:37.078266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.078481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.078515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.082818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.083027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.083071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.087389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.087600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.087631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.092020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.092233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.092263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.096652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.096878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.096918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.101299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.101496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.101526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.105912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.106112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.106142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.110460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.110659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.110697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.115152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.115360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.115390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.119723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.119948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.119977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.124335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.124532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.124561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.128943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.129154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.129177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.133560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.133781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.133810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.138189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.138387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.138416] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.142713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.142924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.142953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.147279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.147489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.147518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.151905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.152117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.152148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.156449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.156657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.156686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.161105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.161328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.161357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.165714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.165937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.165969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.170423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.170649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.170676] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.175032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.175248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.175276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.179655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.179873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.179903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.184248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.184442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.184465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.188850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.933 [2024-05-14 23:04:37.189074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.933 [2024-05-14 23:04:37.189096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.933 [2024-05-14 23:04:37.193468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.193666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.193688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.198042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.198239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.198262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.202608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.202828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:24.934 [2024-05-14 23:04:37.202852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.207254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.207468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.207490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.211874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.212070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.212098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.216419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.216616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.216645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.221053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.221262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.221285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.225690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.225904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.225927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.230367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.230570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.230599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.234955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.235155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.235184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.239547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.239753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.239792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.244149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.244350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.244373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.248731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.248954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.248986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.253345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.253555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.253585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.257918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.258129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.258152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.262562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.262769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.262803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.267152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.267364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.267393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.271864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.272079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.272107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.276712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.276943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.276973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.281461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.281667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.281695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.286153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.286379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.286408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.290799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.290998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.291026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.295364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.295561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.295589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.934 [2024-05-14 23:04:37.299934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.934 [2024-05-14 23:04:37.300132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.934 [2024-05-14 23:04:37.300160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.935 [2024-05-14 23:04:37.304509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.935 [2024-05-14 23:04:37.304705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.935 [2024-05-14 23:04:37.304734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:24.935 [2024-05-14 23:04:37.309007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.935 [2024-05-14 23:04:37.309218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.935 [2024-05-14 23:04:37.309241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:24.935 [2024-05-14 23:04:37.313597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.935 [2024-05-14 23:04:37.313819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.935 [2024-05-14 23:04:37.313842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:24.935 [2024-05-14 23:04:37.318211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.935 [2024-05-14 23:04:37.318416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.935 [2024-05-14 23:04:37.318446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:24.935 [2024-05-14 23:04:37.323008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:24.935 [2024-05-14 23:04:37.323208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:24.935 [2024-05-14 23:04:37.323236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.197 [2024-05-14 23:04:37.327716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.197 [2024-05-14 23:04:37.327942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.197 [2024-05-14 23:04:37.327972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.197 [2024-05-14 23:04:37.332649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.197 
[2024-05-14 23:04:37.332882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.197 [2024-05-14 23:04:37.332905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.197 [2024-05-14 23:04:37.337575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.197 [2024-05-14 23:04:37.337798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.197 [2024-05-14 23:04:37.337827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.197 [2024-05-14 23:04:37.342205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.197 [2024-05-14 23:04:37.342421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.197 [2024-05-14 23:04:37.342449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.197 [2024-05-14 23:04:37.346750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.197 [2024-05-14 23:04:37.346968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.197 [2024-05-14 23:04:37.346996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.197 [2024-05-14 23:04:37.351388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.197 [2024-05-14 23:04:37.351583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.197 [2024-05-14 23:04:37.351606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.197 [2024-05-14 23:04:37.355933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.197 [2024-05-14 23:04:37.356140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.197 [2024-05-14 23:04:37.356163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.197 [2024-05-14 23:04:37.360450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.197 [2024-05-14 23:04:37.360657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.197 [2024-05-14 23:04:37.360680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.197 [2024-05-14 23:04:37.365090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.197 [2024-05-14 23:04:37.365297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.197 [2024-05-14 23:04:37.365320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.197 [2024-05-14 23:04:37.369747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.197 [2024-05-14 23:04:37.369964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.369988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.374404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.374610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.374632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.379029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.379227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.379257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.383609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.383831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.383859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.388173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.388370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.388393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.392752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.392983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.393006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.397384] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.397593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.397624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.402069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.402303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.402336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.406738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.406977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.407010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.411379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.411596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.411623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.416105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.416311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.416337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.420929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.421140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.421169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.425887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.426083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.426112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:17:25.198 [2024-05-14 23:04:37.430480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.430690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.430719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.435173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.435370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.435392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.439848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.440058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.440081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.444465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.444659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.444681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.449161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.449359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.449382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.453766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.453979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.454002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.458388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.458595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.458617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.462910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.463123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.463152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.467608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.467830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.467853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.472334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.472530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.472553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.476925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.477138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.477168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.481493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.481690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.481714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.486136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.486382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.486408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.490720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.490957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.491002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.198 [2024-05-14 23:04:37.495193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.198 [2024-05-14 23:04:37.495427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.198 [2024-05-14 23:04:37.495462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.499773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.499975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.500000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.504344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.504535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.504558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.508969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.509169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.509198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.513476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.513660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.513689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.518074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.518253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.518276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.522704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.522957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.522994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.527306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.527489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.527514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.531921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.532115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.532140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.536542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.536735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.536775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.541144] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.541391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.541428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.545671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.545871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.545899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.550288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.550475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.550501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.554962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.555192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 
[2024-05-14 23:04:37.555233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.559488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.559656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.559681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.564134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.564334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.564359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.568685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.568888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.568926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.573377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.573565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.573590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.577902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.578070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.578094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.199 [2024-05-14 23:04:37.582435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.199 [2024-05-14 23:04:37.582621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.199 [2024-05-14 23:04:37.582645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.587322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.587501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.587528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.593497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.593690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.593715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.600465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.600637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.600661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.605093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.605273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.605298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.609671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.609851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.609876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.614310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.614520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.614545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.619029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.619257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.619287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.623706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.623903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.623934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.628271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.628454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.628483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.632878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.633056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.633085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.637415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.637588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.637620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.643589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.643814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.643838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.650115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.650286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.650310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.654744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.654941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.654972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.659335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.659520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.659543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.664165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.664358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.664381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.671215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.671413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.671439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.676573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.676740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.676777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.681261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.681441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.681465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.685894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.686079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.686106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.690516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.690699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.690727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.695181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 
[2024-05-14 23:04:37.695368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.695391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.699783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.699953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.699976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.704414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.704615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.704638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.708962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.709129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.709154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.713575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.476 [2024-05-14 23:04:37.713740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.476 [2024-05-14 23:04:37.713763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.476 [2024-05-14 23:04:37.718158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.718348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.718371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.722721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.722920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.722943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.727373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.727548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.727570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.731992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.732189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.732211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.736541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.736717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.736742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.741164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.741340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.741363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.745768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.745951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.745974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.750393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.750583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.750607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.755001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.755165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.755189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.759642] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.759831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.759854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.764247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.764414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.764437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.768894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.769068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.769090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.773421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.773599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.773623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.778124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.778290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.778312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.782740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.782925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.782948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.787371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.787552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.787574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
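The block of data_crc32_calc_done errors above is the NVMe/TCP data digest (DDGST) check failing: the receiver recomputes CRC32C over each data PDU payload and compares it with the DDGST value carried in the PDU, and every mismatch is completed back as COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable transport-level status (note dnr:0 in each completion) rather than a media error. A minimal sketch of that digest calculation, in Python for illustration only (the helper name is hypothetical; this is not SPDK code):

def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283

# Any flipped bit in a data PDU payload changes the digest, which is
# what the receiver is detecting in the log entries above.
payload = bytes(4096)
corrupted = b"\x01" + payload[1:]
print(hex(crc32c(payload)), hex(crc32c(corrupted)))

Given the steady cadence and volume of these entries, they appear to be deliberately injected digest failures exercised by the test rather than genuine corruption on the wire.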
00:17:25.477 [2024-05-14 23:04:37.792061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.792242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.792266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.796690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.796868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.796891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.801302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.801505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.801528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.805962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.806136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.806159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.810546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.810713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.810736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.815195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.815388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.815411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.819866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.820036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.820060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.824538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.824732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.824773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.829166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.829347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.829370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.833813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.833989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.834012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.838350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.838518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.838540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.842973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.477 [2024-05-14 23:04:37.843163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.477 [2024-05-14 23:04:37.843193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.477 [2024-05-14 23:04:37.847536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.478 [2024-05-14 23:04:37.847698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.478 [2024-05-14 23:04:37.847721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.478 [2024-05-14 23:04:37.852082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.478 [2024-05-14 23:04:37.852260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.478 [2024-05-14 23:04:37.852284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.478 [2024-05-14 23:04:37.856650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.478 [2024-05-14 23:04:37.856848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.478 [2024-05-14 23:04:37.856872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.478 [2024-05-14 23:04:37.861276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.478 [2024-05-14 23:04:37.861440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.478 [2024-05-14 23:04:37.861465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.478 [2024-05-14 23:04:37.866085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.478 [2024-05-14 23:04:37.866266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.478 [2024-05-14 23:04:37.866295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.870920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.871092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.871122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.875718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.875936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.875964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.880399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.880584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.880613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.885292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.885490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.885519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.890132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.890300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.890323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.894740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.894947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.894970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.899290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.899479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.899503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.903857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.904050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.904073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.908395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.908562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.908585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.912991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.913156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.913179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.917635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.917825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 
[2024-05-14 23:04:37.917848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.922264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.922433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.922456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.926907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.927086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.927108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.931493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.931658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.931681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.936053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.936220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.936243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.940648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.940853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.940876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.945416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.945589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.945613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.950292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.950471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.950495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.954878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.955049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.737 [2024-05-14 23:04:37.955071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.737 [2024-05-14 23:04:37.959547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.737 [2024-05-14 23:04:37.959714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:37.959737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:37.964108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:37.964276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:37.964298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:37.968675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:37.968880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:37.968903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:37.973219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:37.973396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:37.973418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:37.977907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:37.978077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:37.978100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:37.982515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:37.982698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:37.982725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:37.987189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:37.987399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:37.987427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:37.991797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:37.992043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:37.992083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:37.996376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:37.996565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:37.996611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.000715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.000811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.000838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.005306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.005382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.005408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.009956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.010031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.010056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.014540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.014642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.014666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.019165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.019256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.019280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.023643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.023722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.023747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.028310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.028403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.028427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.033114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.033209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.033233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.038039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.038116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.038138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.042676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.042767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.042803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.047312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 
[2024-05-14 23:04:38.047385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.047407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.051966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.052057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.052080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.056588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.056670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.056693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:25.738 [2024-05-14 23:04:38.061180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c9370) with pdu=0x2000190fef90 00:17:25.738 [2024-05-14 23:04:38.061251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:25.738 [2024-05-14 23:04:38.061275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:25.738 00:17:25.738 Latency(us) 00:17:25.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.738 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:25.738 nvme0n1 : 2.00 6443.35 805.42 0.00 0.00 2476.64 1995.87 6851.49 00:17:25.738 =================================================================================================================== 00:17:25.738 Total : 6443.35 805.42 0.00 0.00 2476.64 1995.87 6851.49 00:17:25.738 0 00:17:25.738 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:25.738 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:25.738 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:25.738 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:25.738 | .driver_specific 00:17:25.738 | .nvme_error 00:17:25.738 | .status_code 00:17:25.738 | .command_transient_transport_error' 00:17:25.996 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 416 > 0 )) 00:17:25.996 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87506 00:17:25.996 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87506 ']' 00:17:25.996 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87506 00:17:25.996 23:04:38 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:17:25.996 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:25.996 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87506 00:17:26.254 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:26.254 killing process with pid 87506 00:17:26.254 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:26.254 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87506' 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87506 00:17:26.255 Received shutdown signal, test time was about 2.000000 seconds 00:17:26.255 00:17:26.255 Latency(us) 00:17:26.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.255 =================================================================================================================== 00:17:26.255 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87506 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 87232 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 87232 ']' 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 87232 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87232 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:26.255 killing process with pid 87232 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87232' 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 87232 00:17:26.255 [2024-05-14 23:04:38.609520] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:26.255 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 87232 00:17:26.513 00:17:26.513 real 0m15.857s 00:17:26.513 user 0m30.894s 00:17:26.513 sys 0m4.112s 00:17:26.513 ************************************ 00:17:26.513 END TEST nvmf_digest_error 00:17:26.513 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:26.513 23:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:26.513 ************************************ 00:17:26.513 23:04:38 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:26.513 23:04:38 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # 
nvmftestfini 00:17:26.513 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.513 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:17:26.513 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.513 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:26.513 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.513 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.773 rmmod nvme_tcp 00:17:26.773 rmmod nvme_fabrics 00:17:26.773 rmmod nvme_keyring 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 87232 ']' 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 87232 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 87232 ']' 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 87232 00:17:26.773 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (87232) - No such process 00:17:26.773 Process with pid 87232 is not found 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 87232 is not found' 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:26.773 00:17:26.773 real 0m34.029s 00:17:26.773 user 1m4.690s 00:17:26.773 sys 0m8.619s 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:26.773 23:04:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:26.773 ************************************ 00:17:26.773 END TEST nvmf_digest 00:17:26.773 ************************************ 00:17:26.773 23:04:39 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 1 -eq 1 ]] 00:17:26.773 23:04:39 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ tcp == \t\c\p ]] 00:17:26.773 23:04:39 nvmf_tcp -- nvmf/nvmf.sh@111 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:17:26.773 23:04:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:26.773 23:04:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:26.773 23:04:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:26.773 ************************************ 00:17:26.773 START TEST nvmf_mdns_discovery 00:17:26.773 ************************************ 00:17:26.773 23:04:39 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:17:26.773 * Looking for test storage... 00:17:26.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.773 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:17:26.774 
23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:26.774 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:27.033 Cannot find device "nvmf_tgt_br" 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.033 Cannot find device "nvmf_tgt_br2" 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:27.033 Cannot find device "nvmf_tgt_br" 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:27.033 Cannot find device "nvmf_tgt_br2" 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:27.033 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:27.292 23:04:39 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:27.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:17:27.292 00:17:27.292 --- 10.0.0.2 ping statistics --- 00:17:27.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.292 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:27.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:27.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:27.292 00:17:27.292 --- 10.0.0.3 ping statistics --- 00:17:27.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.292 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:27.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:27.292 00:17:27.292 --- 10.0.0.1 ping statistics --- 00:17:27.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.292 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=87794 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 87794 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 87794 ']' 
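In short, the nvmf_veth_init trace above amounts to roughly the following commands (a condensed sketch reconstructed from the trace, not an authoritative recipe; run as root, with the interface names and 10.0.0.x addresses the common.sh helpers use):

# Create the target network namespace and three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target-side ends into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up and bridge the host-side ends together
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic in, permit forwarding across the bridge, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3

This leaves the initiator side at 10.0.0.1 reaching two target addresses, 10.0.0.2 and 10.0.0.3, inside nvmf_tgt_ns_spdk, which is exactly what the ping statistics above confirm.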
00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:27.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:27.292 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.292 [2024-05-14 23:04:39.572849] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:27.292 [2024-05-14 23:04:39.572949] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.550 [2024-05-14 23:04:39.710951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.550 [2024-05-14 23:04:39.780560] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.550 [2024-05-14 23:04:39.780630] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.550 [2024-05-14 23:04:39.780645] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.550 [2024-05-14 23:04:39.780656] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.550 [2024-05-14 23:04:39.780664] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
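The target itself is then started inside that namespace with initialization deferred to RPC time, which is why a framework_start_init call appears later in the trace. A minimal sketch of this step (binary and script paths as in this CI workspace; the polling loop is just one possible way to wait for the RPC socket, not what the helper literally does):

# Launch nvmf_tgt in the target namespace, deferring subsystem init until RPC time
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
# Wait until the default RPC socket answers before configuring the target
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done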
00:17:27.550 [2024-05-14 23:04:39.780703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.550 [2024-05-14 23:04:39.922143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.550 [2024-05-14 23:04:39.930084] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:27.550 [2024-05-14 23:04:39.930356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.550 null0 00:17:27.550 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd 
bdev_null_create null1 1000 512 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.808 null1 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.808 null2 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.808 null3 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # hostpid=87831 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # waitforlisten 87831 /tmp/host.sock 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 87831 ']' 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:27.808 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.808 23:04:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:27.808 [2024-05-14 23:04:40.025177] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:17:27.808 [2024-05-14 23:04:40.025261] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87831 ] 00:17:27.808 [2024-05-14 23:04:40.165397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.067 [2024-05-14 23:04:40.249353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.002 23:04:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:29.002 23:04:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:17:29.002 23:04:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:17:29.002 23:04:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:17:29.002 23:04:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:17:29.002 23:04:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # avahipid=87860 00:17:29.002 23:04:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # sleep 1 00:17:29.002 23:04:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:17:29.002 23:04:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:17:29.002 Process 1006 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:17:29.002 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:17:29.002 Successfully dropped root privileges. 00:17:29.002 avahi-daemon 0.8 starting up. 00:17:29.002 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:17:29.002 Successfully called chroot(). 00:17:29.002 Successfully dropped remaining capabilities. 00:17:29.002 No service file found in /etc/avahi/services. 00:17:29.938 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:17:29.938 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:17:29.938 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:17:29.938 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:17:29.938 Network interface enumeration completed. 00:17:29.938 Registering new address record for fe80::4cd4:6cff:fefe:7d2c on nvmf_tgt_if2.*. 00:17:29.938 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:17:29.938 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:17:29.938 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:17:29.938 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 2207536098. 
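The avahi-daemon above is launched inside the same netns with its configuration fed through process substitution (the /dev/fd/63 in the trace). A standalone equivalent, writing the same config to a file instead (the file name is illustrative), would be roughly:

    cat > /tmp/avahi-test.conf <<'EOF'
    [server]
    allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    use-ipv4=yes
    use-ipv6=no
    EOF
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-test.conf &

Limiting avahi to the two test interfaces keeps the mDNS traffic confined to the 10.0.0.2/10.0.0.3 test network, which is why only those two IPv4 address records get registered above.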
00:17:29.938 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:29.938 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.938 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.938 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.938 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:17:29.938 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # notify_id=0 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:17:29.939 23:04:42 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:17:29.939 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.199 [2024-05-14 23:04:42.459523] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:17:30.199 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.200 
23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.200 [2024-05-14 23:04:42.522982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.200 [2024-05-14 23:04:42.562957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.200 
23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.200 [2024-05-14 23:04:42.570880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # avahi_clientpid=87911 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:17:30.200 23:04:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:17:31.133 Established under name 'CDC' 00:17:31.133 [2024-05-14 23:04:43.359521] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:17:31.391 [2024-05-14 23:04:43.759560] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:17:31.391 [2024-05-14 23:04:43.759607] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:17:31.391 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:17:31.391 cookie is 0 00:17:31.391 is_local: 1 00:17:31.391 our_own: 0 00:17:31.391 wide_area: 0 00:17:31.391 multicast: 1 00:17:31.391 cached: 1 00:17:31.666 [2024-05-14 23:04:43.859550] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:17:31.667 [2024-05-14 23:04:43.859597] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:17:31.667 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:17:31.667 cookie is 0 00:17:31.667 is_local: 1 00:17:31.667 our_own: 0 00:17:31.667 wide_area: 0 00:17:31.667 multicast: 1 00:17:31.667 cached: 1 00:17:32.623 [2024-05-14 23:04:44.767179] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:32.623 [2024-05-14 23:04:44.767251] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:32.623 [2024-05-14 23:04:44.767287] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:32.623 [2024-05-14 23:04:44.853359] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:17:32.623 [2024-05-14 23:04:44.866594] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:32.623 [2024-05-14 23:04:44.866628] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:32.623 [2024-05-14 23:04:44.866648] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:32.623 [2024-05-14 23:04:44.914469] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:17:32.623 [2024-05-14 23:04:44.914520] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:17:32.623 [2024-05-14 23:04:44.952736] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:17:32.623 [2024-05-14 23:04:45.007923] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:17:32.623 [2024-05-14 23:04:45.007973] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:35.905 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:17:35.905 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:17:35.905 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # sort 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # xargs 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # sort 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # xargs 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@130 -- # get_bdev_list 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 23:04:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=2 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=2 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.906 23:04:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:17:36.837 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:17:36.837 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:36.837 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=2 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=4 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.838 [2024-05-14 23:04:49.161934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:36.838 [2024-05-14 23:04:49.162796] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:36.838 [2024-05-14 23:04:49.162982] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:36.838 [2024-05-14 23:04:49.163153] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:36.838 [2024-05-14 23:04:49.163175] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.838 [2024-05-14 23:04:49.173894] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:36.838 [2024-05-14 23:04:49.174807] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:36.838 [2024-05-14 23:04:49.175018] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.838 23:04:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:17:37.095 [2024-05-14 23:04:49.307926] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:17:37.095 [2024-05-14 23:04:49.308360] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:17:37.095 [2024-05-14 23:04:49.369782] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:17:37.095 [2024-05-14 23:04:49.370007] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 
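The two discovery paths appearing here come from two pieces wired together earlier in the run: the host-side mDNS discovery service started over /tmp/host.sock, and the CDC record advertised from inside the target netns with avahi-publish. Condensed from the trace (all arguments as logged; rpc.py invocation shown in place of the rpc_cmd wrapper):

    # host side: browse _nvme-disc._tcp and auto-attach discovered subsystems
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

    # target side: advertise the central discovery controller (CDC) over mDNS
    ip netns exec nvmf_tgt_ns_spdk avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 \
        NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp &

    # the get_* checks in the trace then compare sorted name lists, e.g.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name' | sort | xargs
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs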
00:17:37.095 [2024-05-14 23:04:49.370024] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:17:37.095 [2024-05-14 23:04:49.370057] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:37.095 [2024-05-14 23:04:49.370419] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:17:37.095 [2024-05-14 23:04:49.370433] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:37.095 [2024-05-14 23:04:49.370440] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:37.095 [2024-05-14 23:04:49.370457] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:37.095 [2024-05-14 23:04:49.415168] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:17:37.095 [2024-05-14 23:04:49.415198] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:17:37.095 [2024-05-14 23:04:49.416155] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:37.095 [2024-05-14 23:04:49.416172] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.028 23:04:50 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:17:38.028 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=0 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=4 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.288 [2024-05-14 23:04:50.503680] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:38.288 [2024-05-14 23:04:50.503891] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:38.288 [2024-05-14 23:04:50.504078] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:38.288 [2024-05-14 23:04:50.504286] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:38.288 [2024-05-14 23:04:50.507169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.288 id:0 cdw10:00000000 cdw11:00000000 00:17:38.288 [2024-05-14 23:04:50.507213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.288 [2024-05-14 23:04:50.507229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.288 [2024-05-14 23:04:50.507239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.288 [2024-05-14 23:04:50.507250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.288 [2024-05-14 23:04:50.507259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.288 [2024-05-14 23:04:50.507269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.288 [2024-05-14 23:04:50.507278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.288 [2024-05-14 23:04:50.507287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:38.288 [2024-05-14 23:04:50.511671] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: 
Discovery[10.0.0.2:8009] got aer 00:17:38.288 [2024-05-14 23:04:50.511735] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:38.288 [2024-05-14 23:04:50.513395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.288 [2024-05-14 23:04:50.513433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.288 [2024-05-14 23:04:50.513447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.288 [2024-05-14 23:04:50.513457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.288 [2024-05-14 23:04:50.513467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.288 [2024-05-14 23:04:50.513476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.288 [2024-05-14 23:04:50.513486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.288 [2024-05-14 23:04:50.513495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.288 [2024-05-14 23:04:50.513514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.288 23:04:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:17:38.288 [2024-05-14 23:04:50.517135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.288 [2024-05-14 23:04:50.523356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.288 [2024-05-14 23:04:50.527146] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.288 [2024-05-14 23:04:50.527275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.288 [2024-05-14 23:04:50.527329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.288 [2024-05-14 23:04:50.527346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.288 [2024-05-14 23:04:50.527358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.288 [2024-05-14 23:04:50.527376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.288 [2024-05-14 23:04:50.527393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.288 [2024-05-14 23:04:50.527403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.288 [2024-05-14 23:04:50.527414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:17:38.288 [2024-05-14 23:04:50.527430] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.288 [2024-05-14 23:04:50.533367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.288 [2024-05-14 23:04:50.533474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.288 [2024-05-14 23:04:50.533526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.288 [2024-05-14 23:04:50.533543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.288 [2024-05-14 23:04:50.533554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.288 [2024-05-14 23:04:50.533572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.288 [2024-05-14 23:04:50.533587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.288 [2024-05-14 23:04:50.533596] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.288 [2024-05-14 23:04:50.533606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.288 [2024-05-14 23:04:50.533621] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.288 [2024-05-14 23:04:50.537212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.288 [2024-05-14 23:04:50.537304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.288 [2024-05-14 23:04:50.537353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.537370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.289 [2024-05-14 23:04:50.537380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.289 [2024-05-14 23:04:50.537397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.289 [2024-05-14 23:04:50.537413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.289 [2024-05-14 23:04:50.537422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.289 [2024-05-14 23:04:50.537432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.289 [2024-05-14 23:04:50.537447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
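The connect() failures with errno 111 (ECONNREFUSED) line up with the nvmf_subsystem_remove_listener calls just above: nothing accepts connections on port 4420 anymore, so the discovery-managed controllers' reconnect attempts to that port are refused while the 4421 listeners added earlier stay reachable. A quick way to confirm the state the follow-up checks presumably assert, reusing the same helpers as above (an illustrative sketch, not the script's verbatim next step):

    # listeners removed on the target side
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

    # remaining paths per discovery controller, expected to settle on 4421 only
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs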
00:17:38.289 [2024-05-14 23:04:50.543437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.289 [2024-05-14 23:04:50.543546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.543596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.543613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.289 [2024-05-14 23:04:50.543624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.289 [2024-05-14 23:04:50.543641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.289 [2024-05-14 23:04:50.543657] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.289 [2024-05-14 23:04:50.543666] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.289 [2024-05-14 23:04:50.543676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.289 [2024-05-14 23:04:50.543710] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.289 [2024-05-14 23:04:50.547269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.289 [2024-05-14 23:04:50.547361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.547410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.547427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.289 [2024-05-14 23:04:50.547438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.289 [2024-05-14 23:04:50.547454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.289 [2024-05-14 23:04:50.547469] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.289 [2024-05-14 23:04:50.547478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.289 [2024-05-14 23:04:50.547487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.289 [2024-05-14 23:04:50.547502] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:38.289 [2024-05-14 23:04:50.553500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.289 [2024-05-14 23:04:50.553596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.553645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.553662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.289 [2024-05-14 23:04:50.553673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.289 [2024-05-14 23:04:50.553689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.289 [2024-05-14 23:04:50.553721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.289 [2024-05-14 23:04:50.553732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.289 [2024-05-14 23:04:50.553742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.289 [2024-05-14 23:04:50.553757] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.289 [2024-05-14 23:04:50.557331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.289 [2024-05-14 23:04:50.557423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.557473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.557490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.289 [2024-05-14 23:04:50.557500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.289 [2024-05-14 23:04:50.557518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.289 [2024-05-14 23:04:50.557533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.289 [2024-05-14 23:04:50.557542] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.289 [2024-05-14 23:04:50.557551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.289 [2024-05-14 23:04:50.557567] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:38.289 [2024-05-14 23:04:50.563582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.289 [2024-05-14 23:04:50.563739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.563841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.563873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.289 [2024-05-14 23:04:50.563893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.289 [2024-05-14 23:04:50.563925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.289 [2024-05-14 23:04:50.563983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.289 [2024-05-14 23:04:50.564003] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.289 [2024-05-14 23:04:50.564019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.289 [2024-05-14 23:04:50.564046] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.289 [2024-05-14 23:04:50.567397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.289 [2024-05-14 23:04:50.567511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.567564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.567582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.289 [2024-05-14 23:04:50.567593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.289 [2024-05-14 23:04:50.567611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.289 [2024-05-14 23:04:50.567627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.289 [2024-05-14 23:04:50.567637] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.289 [2024-05-14 23:04:50.567647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.289 [2024-05-14 23:04:50.567663] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:38.289 [2024-05-14 23:04:50.573685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.289 [2024-05-14 23:04:50.573811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.573870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.573888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.289 [2024-05-14 23:04:50.573899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.289 [2024-05-14 23:04:50.573917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.289 [2024-05-14 23:04:50.573933] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.289 [2024-05-14 23:04:50.573942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.289 [2024-05-14 23:04:50.573952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.289 [2024-05-14 23:04:50.573989] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.289 [2024-05-14 23:04:50.577472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.289 [2024-05-14 23:04:50.577573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.577624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.289 [2024-05-14 23:04:50.577641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.289 [2024-05-14 23:04:50.577652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.290 [2024-05-14 23:04:50.577669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.290 [2024-05-14 23:04:50.577688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.290 [2024-05-14 23:04:50.577705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.290 [2024-05-14 23:04:50.577719] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.290 [2024-05-14 23:04:50.577736] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:38.290 [2024-05-14 23:04:50.583774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.290 [2024-05-14 23:04:50.583880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.583931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.583948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.290 [2024-05-14 23:04:50.583960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.290 [2024-05-14 23:04:50.583978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.290 [2024-05-14 23:04:50.584013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.290 [2024-05-14 23:04:50.584024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.290 [2024-05-14 23:04:50.584034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.290 [2024-05-14 23:04:50.584051] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.290 [2024-05-14 23:04:50.587537] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.290 [2024-05-14 23:04:50.587634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.587691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.587716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.290 [2024-05-14 23:04:50.587729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.290 [2024-05-14 23:04:50.587746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.290 [2024-05-14 23:04:50.587776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.290 [2024-05-14 23:04:50.587788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.290 [2024-05-14 23:04:50.587798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.290 [2024-05-14 23:04:50.587815] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:38.290 [2024-05-14 23:04:50.593847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.290 [2024-05-14 23:04:50.593949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.594001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.594018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.290 [2024-05-14 23:04:50.594029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.290 [2024-05-14 23:04:50.594047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.290 [2024-05-14 23:04:50.594081] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.290 [2024-05-14 23:04:50.594092] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.290 [2024-05-14 23:04:50.594102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.290 [2024-05-14 23:04:50.594118] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.290 [2024-05-14 23:04:50.597599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.290 [2024-05-14 23:04:50.597689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.597738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.597755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.290 [2024-05-14 23:04:50.597792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.290 [2024-05-14 23:04:50.597812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.290 [2024-05-14 23:04:50.597827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.290 [2024-05-14 23:04:50.597836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.290 [2024-05-14 23:04:50.597846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.290 [2024-05-14 23:04:50.597861] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:38.290 [2024-05-14 23:04:50.603917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.290 [2024-05-14 23:04:50.604027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.604078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.604095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.290 [2024-05-14 23:04:50.604106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.290 [2024-05-14 23:04:50.604123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.290 [2024-05-14 23:04:50.604172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.290 [2024-05-14 23:04:50.604186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.290 [2024-05-14 23:04:50.604195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.290 [2024-05-14 23:04:50.604212] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.290 [2024-05-14 23:04:50.607657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.290 [2024-05-14 23:04:50.607745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.607816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.607834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.290 [2024-05-14 23:04:50.607845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.290 [2024-05-14 23:04:50.607863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.290 [2024-05-14 23:04:50.607878] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.290 [2024-05-14 23:04:50.607887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.290 [2024-05-14 23:04:50.607896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.290 [2024-05-14 23:04:50.607912] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:38.290 [2024-05-14 23:04:50.613984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.290 [2024-05-14 23:04:50.614076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.614125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.614142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.290 [2024-05-14 23:04:50.614153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.290 [2024-05-14 23:04:50.614170] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.290 [2024-05-14 23:04:50.614203] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.290 [2024-05-14 23:04:50.614214] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.290 [2024-05-14 23:04:50.614223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.290 [2024-05-14 23:04:50.614239] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.290 [2024-05-14 23:04:50.617716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.290 [2024-05-14 23:04:50.617814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.617864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.617881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.290 [2024-05-14 23:04:50.617892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.290 [2024-05-14 23:04:50.617908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.290 [2024-05-14 23:04:50.617923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.290 [2024-05-14 23:04:50.617932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.290 [2024-05-14 23:04:50.617942] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.290 [2024-05-14 23:04:50.617957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:38.290 [2024-05-14 23:04:50.624045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.290 [2024-05-14 23:04:50.624148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.624197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.290 [2024-05-14 23:04:50.624214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.290 [2024-05-14 23:04:50.624225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.291 [2024-05-14 23:04:50.624242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.291 [2024-05-14 23:04:50.624275] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.291 [2024-05-14 23:04:50.624287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.291 [2024-05-14 23:04:50.624296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.291 [2024-05-14 23:04:50.624311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.291 [2024-05-14 23:04:50.627783] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.291 [2024-05-14 23:04:50.627873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.291 [2024-05-14 23:04:50.627922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.291 [2024-05-14 23:04:50.627939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.291 [2024-05-14 23:04:50.627950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.291 [2024-05-14 23:04:50.627966] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.291 [2024-05-14 23:04:50.627981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.291 [2024-05-14 23:04:50.627991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.291 [2024-05-14 23:04:50.628000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.291 [2024-05-14 23:04:50.628015] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:38.291 [2024-05-14 23:04:50.634109] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.291 [2024-05-14 23:04:50.634202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.291 [2024-05-14 23:04:50.634251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.291 [2024-05-14 23:04:50.634268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.291 [2024-05-14 23:04:50.634278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.291 [2024-05-14 23:04:50.634295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.291 [2024-05-14 23:04:50.634328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.291 [2024-05-14 23:04:50.634338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.291 [2024-05-14 23:04:50.634348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.291 [2024-05-14 23:04:50.634363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.291 [2024-05-14 23:04:50.637844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.291 [2024-05-14 23:04:50.637932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.291 [2024-05-14 23:04:50.637981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.291 [2024-05-14 23:04:50.637998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05c40 with addr=10.0.0.2, port=4420 00:17:38.291 [2024-05-14 23:04:50.638008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05c40 is same with the state(5) to be set 00:17:38.291 [2024-05-14 23:04:50.638025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05c40 (9): Bad file descriptor 00:17:38.291 [2024-05-14 23:04:50.638039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.291 [2024-05-14 23:04:50.638048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.291 [2024-05-14 23:04:50.638058] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.291 [2024-05-14 23:04:50.638073] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
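The blocks above are bdev_nvme retrying the two mDNS-discovered controllers against their old listeners (10.0.0.2:4420 and 10.0.0.3:4420); every connect() fails with errno 111, which on Linux is ECONNREFUSED, so each reconnect poll ends in "Resetting controller failed." That is consistent with the subsystems having moved to port 4421, which the discovery poller confirms just below. A hedged way to double-check from the target side (this assumes the nvmf_tgt_ns_spdk namespace that this harness creates later in the log, and that ss is installed in the VM):
    # only 4421 should still be listening once the 4420 listeners are removed
    ip netns exec nvmf_tgt_ns_spdk ss -ltn | grep -E ':(4420|4421)'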
00:17:38.291 [2024-05-14 23:04:50.643338] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:38.291 [2024-05-14 23:04:50.643378] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:38.291 [2024-05-14 23:04:50.643411] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:38.291 [2024-05-14 23:04:50.644170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:17:38.291 [2024-05-14 23:04:50.644263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.291 [2024-05-14 23:04:50.644313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.291 [2024-05-14 23:04:50.644331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc84e0 with addr=10.0.0.3, port=4420 00:17:38.291 [2024-05-14 23:04:50.644342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc84e0 is same with the state(5) to be set 00:17:38.291 [2024-05-14 23:04:50.644359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc84e0 (9): Bad file descriptor 00:17:38.291 [2024-05-14 23:04:50.644430] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:17:38.291 [2024-05-14 23:04:50.644451] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:17:38.291 [2024-05-14 23:04:50.644471] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:38.291 [2024-05-14 23:04:50.644497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:17:38.291 [2024-05-14 23:04:50.644517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:17:38.291 [2024-05-14 23:04:50.644527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:17:38.291 [2024-05-14 23:04:50.644569] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
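Once discovery reports the 4420 paths as "not found" and 4421 as "found again", the script verifies that each attached controller now exposes only the 4421 path. A minimal sketch of that check, reusing the rpc_cmd/jq pipeline visible in the trace below (the /tmp/host.sock socket, the rpc.py path and the mdns0_nvme0 controller name are all taken from this run):
    # list the service IDs of the remaining paths for one mdns-discovered controller
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # expected at this point in the test: 4421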
00:17:38.613 [2024-05-14 23:04:50.729511] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:38.613 [2024-05-14 23:04:50.730518] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:17:39.178 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:17:39.178 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:39.178 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.178 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.178 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:17:39.178 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:17:39.178 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:17:39.178 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:17:39.435 23:04:51 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.435 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=0 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=4 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.436 23:04:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:17:39.693 [2024-05-14 23:04:51.859577] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # sort 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # xargs 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:17:40.623 
23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.623 23:04:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.623 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=4 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=8 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:40.881 [2024-05-14 23:04:53.055783] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:17:40.881 2024/05/14 23:04:53 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:17:40.881 request: 00:17:40.881 { 00:17:40.881 "method": "bdev_nvme_start_mdns_discovery", 00:17:40.881 "params": { 00:17:40.881 "name": "mdns", 00:17:40.881 "svcname": "_nvme-disc._http", 00:17:40.881 "hostnqn": "nqn.2021-12.io.spdk:test" 00:17:40.881 } 00:17:40.881 } 00:17:40.881 Got JSON-RPC error response 00:17:40.881 GoRPCClient: error on JSON-RPC call 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:40.881 23:04:53 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:40.881 23:04:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:17:41.138 [2024-05-14 23:04:53.444311] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:17:41.396 [2024-05-14 23:04:53.544306] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:17:41.396 [2024-05-14 23:04:53.644316] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:17:41.396 [2024-05-14 23:04:53.644353] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:17:41.396 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:17:41.396 cookie is 0 00:17:41.396 is_local: 1 00:17:41.396 our_own: 0 00:17:41.396 wide_area: 0 00:17:41.396 multicast: 1 00:17:41.396 cached: 1 00:17:41.396 [2024-05-14 23:04:53.744326] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:17:41.396 [2024-05-14 23:04:53.744375] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:17:41.396 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:17:41.396 cookie is 0 00:17:41.396 is_local: 1 00:17:41.396 our_own: 0 00:17:41.396 wide_area: 0 00:17:41.396 multicast: 1 00:17:41.396 cached: 1 00:17:42.329 [2024-05-14 23:04:54.648626] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:42.330 [2024-05-14 23:04:54.648671] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:42.330 [2024-05-14 23:04:54.648691] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:42.588 [2024-05-14 23:04:54.734775] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:17:42.588 [2024-05-14 23:04:54.748436] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:42.588 [2024-05-14 23:04:54.748474] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:42.588 [2024-05-14 23:04:54.748493] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:42.588 [2024-05-14 23:04:54.795598] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:17:42.588 [2024-05-14 23:04:54.795651] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:17:42.588 [2024-05-14 23:04:54.834781] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:17:42.588 [2024-05-14 23:04:54.894096] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:17:42.588 [2024-05-14 23:04:54.894150] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:45.866 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:17:45.866 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:17:45.866 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:17:45.866 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # sort 00:17:45.866 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.866 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.866 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # xargs 00:17:45.866 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.866 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:17:45.866 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:17:45.866 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # xargs 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # sort 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery 
-b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.867 [2024-05-14 23:04:58.244618] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:17:45.867 2024/05/14 23:04:58 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:17:45.867 request: 00:17:45.867 { 00:17:45.867 "method": "bdev_nvme_start_mdns_discovery", 00:17:45.867 "params": { 00:17:45.867 "name": "cdc", 00:17:45.867 "svcname": "_nvme-disc._tcp", 00:17:45.867 "hostnqn": "nqn.2021-12.io.spdk:test" 00:17:45.867 } 00:17:45.867 } 00:17:45.867 Got JSON-RPC error response 00:17:45.867 GoRPCClient: error on JSON-RPC call 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # xargs 00:17:45.867 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # sort 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:17:46.125 23:04:58 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # kill 87831 00:17:46.125 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # wait 87831 00:17:46.125 [2024-05-14 23:04:58.452877] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # kill 87911 00:17:46.386 Got SIGTERM, quitting. 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # kill 87860 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:17:46.386 Got SIGTERM, quitting. 00:17:46.386 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:17:46.386 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:17:46.386 avahi-daemon 0.8 exiting. 
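Both start requests above are rejected with JSON-RPC error Code=-17 (File exists): one reuses the discovery name mdns with a different service (_nvme-disc._http), the other uses a new name (cdc) for the _nvme-disc._tcp service that is already being browsed, and bdev_mdns_client refuses duplicates in either direction. The teardown then stops the running poller. A hedged sketch of that sequence against the host RPC socket (socket path, flags and NQN are copied from this run; the RPC variable is only for readability):
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
    # both of these fail with Code=-17 (File exists) while discovery "mdns" is active on _nvme-disc._tcp
    $RPC bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test || true
    $RPC bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test || true
    # clean shutdown, which also stops the avahi poller for _nvme-disc._tcp
    $RPC bdev_nvme_stop_mdns_discovery -b mdns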
00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:46.386 rmmod nvme_tcp 00:17:46.386 rmmod nvme_fabrics 00:17:46.386 rmmod nvme_keyring 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 87794 ']' 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 87794 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@946 -- # '[' -z 87794 ']' 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # kill -0 87794 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # uname 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87794 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:46.386 killing process with pid 87794 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87794' 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@965 -- # kill 87794 00:17:46.386 [2024-05-14 23:04:58.702245] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:46.386 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@970 -- # wait 87794 00:17:46.645 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:46.645 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:46.645 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:46.645 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.645 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:46.645 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.645 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.645 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.645 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:46.645 ************************************ 00:17:46.645 END TEST nvmf_mdns_discovery 00:17:46.645 ************************************ 00:17:46.645 00:17:46.645 real 0m19.877s 00:17:46.645 user 0m39.631s 00:17:46.645 sys 0m1.965s 00:17:46.645 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:17:46.645 23:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.645 23:04:58 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 1 -eq 1 ]] 00:17:46.645 23:04:58 nvmf_tcp -- nvmf/nvmf.sh@115 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:46.645 23:04:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:46.645 23:04:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:46.645 23:04:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:46.645 ************************************ 00:17:46.645 START TEST nvmf_host_multipath 00:17:46.645 ************************************ 00:17:46.645 23:04:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:46.904 * Looking for test storage... 00:17:46.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.904 23:04:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.905 23:04:59 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:17:46.905 Cannot find device "nvmf_tgt_br" 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.905 Cannot find device "nvmf_tgt_br2" 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:46.905 Cannot find device "nvmf_tgt_br" 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:46.905 Cannot find device "nvmf_tgt_br2" 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.905 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:46.905 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:47.164 
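What nvmf_veth_init is doing here (together with the bridge and iptables steps just below) is building a small two-path test network: the initiator keeps 10.0.0.1 in the default namespace, the two target interfaces get 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, and everything is joined by the nvmf_br bridge so both target addresses are reachable. A condensed sketch of that setup, using the interface names and addresses from the log (the stand-alone script form is an assumption; the real code lives in test/nvmf/common.sh):

# sketch: the virtual two-path network used by the multipath test
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # tie the host-side veth ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # sanity-check both target addresses

The "Cannot find device" and "Cannot open network namespace" errors above are expected: the init code first tries to tear down any leftover interfaces and namespace from a previous run, and each failed delete is followed by "# true".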
23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:47.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:17:47.164 00:17:47.164 --- 10.0.0.2 ping statistics --- 00:17:47.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.164 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:47.164 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:47.164 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:47.164 00:17:47.164 --- 10.0.0.3 ping statistics --- 00:17:47.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.164 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:47.164 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:47.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:47.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:47.165 00:17:47.165 --- 10.0.0.1 ping statistics --- 00:17:47.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.165 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=88421 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 88421 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 88421 ']' 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:47.165 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:47.165 [2024-05-14 23:04:59.493837] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:47.165 [2024-05-14 23:04:59.493921] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.424 [2024-05-14 23:04:59.628921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:47.424 [2024-05-14 23:04:59.699322] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.424 [2024-05-14 23:04:59.699379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:47.424 [2024-05-14 23:04:59.699392] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.424 [2024-05-14 23:04:59.699402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.424 [2024-05-14 23:04:59.699411] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.424 [2024-05-14 23:04:59.699546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.424 [2024-05-14 23:04:59.699556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.424 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:47.424 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:17:47.424 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.424 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:47.424 23:04:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:47.683 23:04:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.683 23:04:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=88421 00:17:47.683 23:04:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:47.942 [2024-05-14 23:05:00.083091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.942 23:05:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:48.200 Malloc0 00:17:48.200 23:05:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:48.458 23:05:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:48.716 23:05:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.716 [2024-05-14 23:05:01.082898] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:48.716 [2024-05-14 23:05:01.083179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.716 23:05:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:49.283 [2024-05-14 23:05:01.383255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:49.283 23:05:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=88504 00:17:49.283 23:05:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:49.283 23:05:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
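At this point the target side is fully assembled: nvmf_tgt (pid 88421) runs inside the namespace on core mask 0x3, and subsystem nqn.2016-06.io.spdk:cnode1 exports the Malloc0 namespace through two TCP listeners, 10.0.0.2:4420 and 10.0.0.2:4421, with ANA reporting enabled. A condensed sketch of that RPC sequence, assuming the target is already up and rpc.py talks to its default /var/tmp/spdk.sock:

# sketch: target-side configuration replayed from the RPC calls in the log
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport; -o and -u 8192 are the options nvmf/common.sh adds for tcp
$rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # path 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # path 2

bdevperf itself has just been launched (-m 0x4 -z -r /var/tmp/bdevperf.sock); once it is listening, the host-side controller Nvme0 is attached to 10.0.0.2:4420 and then, with -x multipath, to 10.0.0.2:4421, so the single Nvme0n1 bdev ends up backed by both paths.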
00:17:49.283 23:05:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 88504 /var/tmp/bdevperf.sock 00:17:49.283 23:05:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 88504 ']' 00:17:49.283 23:05:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.283 23:05:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:49.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.283 23:05:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.283 23:05:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:49.283 23:05:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:50.217 23:05:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:50.217 23:05:02 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:17:50.217 23:05:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:50.476 23:05:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:50.734 Nvme0n1 00:17:50.734 23:05:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:50.992 Nvme0n1 00:17:51.250 23:05:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:51.250 23:05:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:52.187 23:05:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:52.187 23:05:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:52.445 23:05:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:52.704 23:05:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:52.704 23:05:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88597 00:17:52.704 23:05:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:52.704 23:05:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:59.322 23:05:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:59.322 23:05:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:59.322 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:59.322 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:59.322 Attaching 4 probes... 00:17:59.322 @path[10.0.0.2, 4421]: 16800 00:17:59.323 @path[10.0.0.2, 4421]: 17216 00:17:59.323 @path[10.0.0.2, 4421]: 16808 00:17:59.323 @path[10.0.0.2, 4421]: 17295 00:17:59.323 @path[10.0.0.2, 4421]: 17125 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88597 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88729 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:59.323 23:05:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:05.889 Attaching 4 probes... 
00:18:05.889 @path[10.0.0.2, 4420]: 16698 00:18:05.889 @path[10.0.0.2, 4420]: 17105 00:18:05.889 @path[10.0.0.2, 4420]: 16791 00:18:05.889 @path[10.0.0.2, 4420]: 15670 00:18:05.889 @path[10.0.0.2, 4420]: 15793 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88729 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:05.889 23:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:05.889 23:05:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:06.147 23:05:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:06.147 23:05:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88859 00:18:06.147 23:05:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:06.147 23:05:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:12.756 Attaching 4 probes... 
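Every scenario in this test follows the same two-step pattern visible above: set_ANA_state assigns an ANA state to each of the two listeners, and confirm_io_on_port then watches the target with the nvmf_path.bt bpftrace script for a few seconds and checks that bdevperf's I/O is landing on the port whose listener advertises the expected state. Roughly, reconstructed from the xtrace output (the helper bodies below are a sketch; the exact plumbing in host/multipath.sh may differ):

# sketch of the two helpers driving each scenario
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {   # $1 = state for port 4420, $2 = state for port 4421
    $rpc nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

confirm_io_on_port() {   # $1 = expected ANA state, $2 = expected port ('' when no path should carry I/O)
    scripts/bpftrace.sh 88421 scripts/bpf/nvmf_path.bt &> trace.txt &   # count I/O per path on the target pid
    dtrace_pid=$!
    sleep 6                                                             # let bdevperf run against the new ANA layout
    active_port=$($rpc nvmf_subsystem_get_listeners $NQN |
        jq -r ".[] | select(.ana_states[0].ana_state==\"$1\") | .address.trsvcid")
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    kill "$dtrace_pid"
    rm -f trace.txt
    [[ $active_port == "$2" && $port == "$2" ]]
}

The @path[10.0.0.2, PORT]: N lines in the trace output are the bpftrace counters: N is the number of requests the script counted on that path during the 6-second window, which is why the counter appears on 4421 while it is optimized and shifts to 4420 once 4421 is made inaccessible.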
00:18:12.756 @path[10.0.0.2, 4421]: 11915 00:18:12.756 @path[10.0.0.2, 4421]: 16891 00:18:12.756 @path[10.0.0.2, 4421]: 16785 00:18:12.756 @path[10.0.0.2, 4421]: 16749 00:18:12.756 @path[10.0.0.2, 4421]: 16806 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88859 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:12.756 23:05:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:12.756 23:05:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:13.322 23:05:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:13.322 23:05:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88994 00:18:13.322 23:05:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:13.322 23:05:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:19.879 Attaching 4 probes... 
00:18:19.879 00:18:19.879 00:18:19.879 00:18:19.879 00:18:19.879 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88994 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:19.879 23:05:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:20.137 23:05:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:20.137 23:05:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89126 00:18:20.137 23:05:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:20.137 23:05:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:26.727 Attaching 4 probes... 
00:18:26.727 @path[10.0.0.2, 4421]: 16430 00:18:26.727 @path[10.0.0.2, 4421]: 16502 00:18:26.727 @path[10.0.0.2, 4421]: 16714 00:18:26.727 @path[10.0.0.2, 4421]: 16690 00:18:26.727 @path[10.0.0.2, 4421]: 16410 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89126 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:26.727 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:26.727 [2024-05-14 23:05:38.810116] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.727 [2024-05-14 23:05:38.810275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 
00:18:26.727 [the same tcp.c:1598 nvmf_tcp_qpair_set_recv_state "The recv state of tqpair=0x13a3840 is same with the state(5) to be set" error repeats here once per 23:05:38.810xxx/811xxx timestamp while the 4421 listener is being removed, each occurrence differing only in the microsecond timestamp; the final few occurrences follow] 00:18:26.728 [2024-05-14
23:05:38.811225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.728 [2024-05-14 23:05:38.811233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.728 [2024-05-14 23:05:38.811242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.728 [2024-05-14 23:05:38.811252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3840 is same with the state(5) to be set 00:18:26.728 23:05:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:27.664 23:05:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:27.664 23:05:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89257 00:18:27.664 23:05:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:27.664 23:05:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:34.238 23:05:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:34.238 23:05:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:34.238 Attaching 4 probes... 
00:18:34.238 @path[10.0.0.2, 4420]: 15006 00:18:34.238 @path[10.0.0.2, 4420]: 16442 00:18:34.238 @path[10.0.0.2, 4420]: 16612 00:18:34.238 @path[10.0.0.2, 4420]: 16541 00:18:34.238 @path[10.0.0.2, 4420]: 16342 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89257 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:34.238 [2024-05-14 23:05:46.396426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:34.238 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:34.498 23:05:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:41.079 23:05:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:41.079 23:05:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=89454 00:18:41.079 23:05:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88421 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:41.079 23:05:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:46.359 23:05:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:46.359 23:05:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:46.945 Attaching 4 probes... 
00:18:46.945 @path[10.0.0.2, 4421]: 15388 00:18:46.945 @path[10.0.0.2, 4421]: 13842 00:18:46.945 @path[10.0.0.2, 4421]: 15723 00:18:46.945 @path[10.0.0.2, 4421]: 14481 00:18:46.945 @path[10.0.0.2, 4421]: 16052 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 89454 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 88504 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 88504 ']' 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 88504 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88504 00:18:46.945 killing process with pid 88504 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88504' 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 88504 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 88504 00:18:46.945 Connection closed with partial response: 00:18:46.945 00:18:46.945 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 88504 00:18:46.945 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:46.945 [2024-05-14 23:05:01.457453] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:18:46.945 [2024-05-14 23:05:01.457566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88504 ] 00:18:46.945 [2024-05-14 23:05:01.609990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.945 [2024-05-14 23:05:01.694246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.945 Running I/O for 90 seconds... 
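The bdevperf log that follows is dominated by ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions: each time a listener is flipped to the inaccessible ANA state, the target fails the commands queued on that path with this status, and the -x multipath controller on the host moves the I/O to the other listener, which is the failover the @path counters above were confirming. To look at the host's view of the two paths while such a run is in flight, something along these lines should work against the bdevperf RPC socket (bdev_nvme_get_io_paths is assumed to be available in this SPDK build; the controller and bdev names come from the log):

# sketch: inspect the host-side paths of the multipath controller inside bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers           # Nvme0, with trids for both 4420 and 4421
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths -n Nvme0n1   # assumed RPC: per-path ANA state as seen by the host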
00:18:46.945 [2024-05-14 23:05:11.684511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.945 [2024-05-14 23:05:11.684594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.684658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.946 [2024-05-14 23:05:11.684681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.684706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.684724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.684747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.684779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.684805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.684823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.684845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.684863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.684885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.684903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.684925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.684942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.684964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.684981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:46.946 [2024-05-14 23:05:11.685893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.685973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.685995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.686012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.686035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.946 [2024-05-14 23:05:11.686053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.946 [2024-05-14 23:05:11.686076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.686975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.686998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.687015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.687037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.687056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.687080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.687097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 
m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.687120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.687145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.687169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.687187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.687210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.687227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.687250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.687267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.687290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.687308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.687331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.687349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.688483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.947 [2024-05-14 23:05:11.688516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.688547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.947 [2024-05-14 23:05:11.688566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.688590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.947 [2024-05-14 23:05:11.688608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.688631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.947 [2024-05-14 23:05:11.688648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.688671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.947 [2024-05-14 23:05:11.688689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.947 [2024-05-14 23:05:11.688711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.947 [2024-05-14 23:05:11.688729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.688751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.688784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.688825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.688844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.688867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.688884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.688907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.688925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.688947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.688964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.688986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.948 [2024-05-14 23:05:11.689513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.689965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.689982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.690004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.690022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.690044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.690061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.690084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.690101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.690131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.690149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.690172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.690189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.690212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.690229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.948 [2024-05-14 23:05:11.690252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.948 [2024-05-14 23:05:11.690269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.690291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:11.690309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.690331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:11.690356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.690380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:11.690397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.690420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:11.690438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.690461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:11.690484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.691162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:11.691193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.691222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.949 [2024-05-14 23:05:11.691241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.691265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.949 [2024-05-14 23:05:11.691283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.691305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.949 [2024-05-14 23:05:11.691323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.691345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.949 [2024-05-14 23:05:11.691362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.691385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.949 [2024-05-14 23:05:11.691402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:18:46.949 [2024-05-14 23:05:11.691425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.949 [2024-05-14 23:05:11.691443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:11.691471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.949 [2024-05-14 23:05:11.691489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.218675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.949 [2024-05-14 23:05:18.218759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.218844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.218867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.218892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.218910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.218933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.218951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.218975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.218992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.949 [2024-05-14 23:05:18.219618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.949 [2024-05-14 23:05:18.219642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.219659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.219683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.219700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.219723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.219740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.219776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.219797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.219820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.219838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.219861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.219888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.219913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.219931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.219954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.950 [2024-05-14 23:05:18.219971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.219994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.950 [2024-05-14 23:05:18.220932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.950 [2024-05-14 23:05:18.220973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.220997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.950 [2024-05-14 23:05:18.221014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.221037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.950 [2024-05-14 23:05:18.221068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.221096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.950 [2024-05-14 23:05:18.221114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.221137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.950 [2024-05-14 23:05:18.221154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.221178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.950 [2024-05-14 23:05:18.221195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.221219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.950 [2024-05-14 23:05:18.221236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.221259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.950 [2024-05-14 23:05:18.221276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.950 [2024-05-14 23:05:18.221298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:18:46.951 [2024-05-14 23:05:18.221676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.221945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.221963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.225428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.951 [2024-05-14 23:05:18.225483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.225526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.225568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.225620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.225660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.225702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.225743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.225803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.225845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.225905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.225949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.225972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.225989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.226013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.226030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.951 [2024-05-14 23:05:18.226053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.951 [2024-05-14 23:05:18.226071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.952 [2024-05-14 23:05:18.226404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.226756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.226788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.952 [2024-05-14 23:05:18.227933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.952 [2024-05-14 23:05:18.227950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.227973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.227990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:18:46.953 [2024-05-14 23:05:18.228095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.228872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.228895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.229022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.229052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.229089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.229115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.229132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.229157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.229175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.229198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.229215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.229238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.229256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.229279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.229297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.229320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.229338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.229361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.229379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.229401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.229419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.229442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.953 [2024-05-14 23:05:18.229459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:46.953 [2024-05-14 23:05:18.229482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.953 [2024-05-14 23:05:18.229499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.229523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.229553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.229602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.229629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.229658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.229689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.229723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.229742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.229781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.229802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.229827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.229844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.229868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.229885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.229908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.229926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.229948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.229966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.229989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.230006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.230029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.230046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.230070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.230087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.230111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.230128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.231127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.231179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.231220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.231262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.231303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.231343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.231390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.954 [2024-05-14 23:05:18.231431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:18:46.954 [2024-05-14 23:05:18.231758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.231969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.231993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.232010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.232034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.232051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.954 [2024-05-14 23:05:18.232074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.954 [2024-05-14 23:05:18.232092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.955 [2024-05-14 23:05:18.232132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.955 [2024-05-14 23:05:18.232181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.955 [2024-05-14 23:05:18.232224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.955 [2024-05-14 23:05:18.232264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.955 [2024-05-14 23:05:18.232304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.955 [2024-05-14 23:05:18.232345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.955 [2024-05-14 23:05:18.232385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.955 [2024-05-14 23:05:18.232425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.955 [2024-05-14 23:05:18.232465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.955 [2024-05-14 23:05:18.232505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.955 [2024-05-14 23:05:18.232545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.232587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.232629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.232677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.232719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.232774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.232819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.232860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.232900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.232941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.232964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.232982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.233004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 
[2024-05-14 23:05:18.233022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.233045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.233078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.233104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.233123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.233146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.233163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.233186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.233204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.233236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.955 [2024-05-14 23:05:18.233255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.955 [2024-05-14 23:05:18.233278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82464 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.233800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.233818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.234565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.234594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.234625] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.234644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.234668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.234686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.234709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.234727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.234751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.234786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.234812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.234830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.234853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.234871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.234894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.234911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.234934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.234961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.234985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.235025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.235078] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.235120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.235160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.235201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.235241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.235282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.235322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.235363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.235403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.956 [2024-05-14 23:05:18.235445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.956 [2024-05-14 23:05:18.235463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 00:18:46.956 [2024-05-14 23:05:18.235486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.235526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.235577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.235618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.235658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.235698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.235738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.235796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.235838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.235879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.235920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.235960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.235978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.236909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.236947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.957 [2024-05-14 23:05:18.236976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.237012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.237042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.237104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.237137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.237178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.237213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.237253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.237285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.237325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.957 [2024-05-14 23:05:18.237358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:46.957 [2024-05-14 23:05:18.237400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.237432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.237474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.237505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.237543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.237564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.237588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.237619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.237645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.237663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.237686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.237704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.237727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.237755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.237799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.237818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.238741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.238790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.238824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.238843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.238868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.238886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.238919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.238936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.238959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.238978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.239019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.239059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.239112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.958 [2024-05-14 23:05:18.239157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:18:46.958 [2024-05-14 23:05:18.239485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.239968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.958 [2024-05-14 23:05:18.239985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.958 [2024-05-14 23:05:18.240025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.959 [2024-05-14 23:05:18.240057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.959 [2024-05-14 23:05:18.240129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.959 [2024-05-14 23:05:18.240204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.959 [2024-05-14 23:05:18.240278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.959 [2024-05-14 23:05:18.240376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.959 [2024-05-14 23:05:18.240447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.959 [2024-05-14 23:05:18.240521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.240601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.240677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.240750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.240852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.240927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.240968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.959 [2024-05-14 23:05:18.241338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82464 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.241960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.241977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.242000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.242018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.242040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.242057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.959 [2024-05-14 23:05:18.242082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.959 [2024-05-14 23:05:18.242100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.242985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:18:46.960 [2024-05-14 23:05:18.243542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.243978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.243995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.244018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.244036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.244058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.244076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.244109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.244138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.244187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.244218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.244257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.960 [2024-05-14 23:05:18.244287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.960 [2024-05-14 23:05:18.244330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.244361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.244402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.244433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.244479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.244527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.244568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.244600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.244638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.244668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.244704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.244733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.244796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.244832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.244860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.244879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.244902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.244920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.244943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.244961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.244988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.961 [2024-05-14 23:05:18.245150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.245968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.245991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.246008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.246032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.246049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.246073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.961 [2024-05-14 23:05:18.246090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.961 [2024-05-14 23:05:18.246123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.962 [2024-05-14 23:05:18.246141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.962 [2024-05-14 23:05:18.247083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.962 [2024-05-14 23:05:18.247162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.962 [2024-05-14 23:05:18.247209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.962 [2024-05-14 23:05:18.247265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.962 [2024-05-14 23:05:18.247306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.962 [2024-05-14 23:05:18.247348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.962 [2024-05-14 23:05:18.247390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:18:46.962 [2024-05-14 23:05:18.247413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.962 [2024-05-14 23:05:18.247431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.962 [2024-05-14 23:05:18.247478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.962 [2024-05-14 23:05:18.247547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.247593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.247635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.247676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.247718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.247773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.247830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.247874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.247914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.247955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.247979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.247997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.962 [2024-05-14 23:05:18.248663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.962 [2024-05-14 23:05:18.248686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.963 [2024-05-14 23:05:18.248704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.248728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.248746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.248785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.248806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.248830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.248847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.248879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.248897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.248920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.248938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.248962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.248979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.249830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.249849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.250620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.250650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.250680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.250711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.250737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.250756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:18:46.963 [2024-05-14 23:05:18.250798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.250817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.250840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.250858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.250882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.250900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.250924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.250941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.250965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.963 [2024-05-14 23:05:18.250983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.963 [2024-05-14 23:05:18.251006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.251961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.251979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.964 [2024-05-14 23:05:18.252342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.964 [2024-05-14 23:05:18.252848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.964 [2024-05-14 23:05:18.252871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.252889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.252912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.252929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.252952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.252970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.252994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:18:46.965 [2024-05-14 23:05:18.253674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.253727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.253745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.254684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.254715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.254745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.254780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.254808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.254826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.254849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.254866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.254891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.254909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.254932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.254949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.254973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.254991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.965 [2024-05-14 23:05:18.255015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.965 [2024-05-14 23:05:18.255033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.966 [2024-05-14 23:05:18.255074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.966 [2024-05-14 23:05:18.255115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.966 [2024-05-14 23:05:18.255155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:46.966 [2024-05-14 23:05:18.255893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.255976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.255999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.256017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.256058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.256099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.256145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.256186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.256234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.966 [2024-05-14 23:05:18.256276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.966 [2024-05-14 23:05:18.256317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.966 [2024-05-14 23:05:18.256358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.966 [2024-05-14 23:05:18.256399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.966 [2024-05-14 23:05:18.256441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.966 [2024-05-14 23:05:18.256504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.966 [2024-05-14 23:05:18.256529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.256547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.256571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.256589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.256612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.256629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.256652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.256674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.256697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.256719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.256743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.256787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.256813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.256832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.256855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.256874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.256896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.256914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.256937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.256955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.256978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.256996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.257019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.257036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.257078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.257101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.257126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.257144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.257167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.257185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.967 
[2024-05-14 23:05:18.257209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.257226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.257250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.257267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.257290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.257308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.257341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.257360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.257383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.257401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.257431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.257450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.967 [2024-05-14 23:05:18.258799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.967 [2024-05-14 23:05:18.258822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.258841] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.258863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.258881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.258904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.258921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.258945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.258962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.258985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 
23:05:18.259255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82840 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.259963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.259982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.260005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.260023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.260045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.260063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.260086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:89 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.260103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.260126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.260144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.968 [2024-05-14 23:05:18.260166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.968 [2024-05-14 23:05:18.260184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.260898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:18:46.969 [2024-05-14 23:05:18.260939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.260957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.261855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.261889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.261919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.261940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.261964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.261982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.262005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.262022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.262045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.262063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.262085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.262103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.262126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.262155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.262181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.262199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.262222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.262240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.262263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.262281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.262303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.262321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.262344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.969 [2024-05-14 23:05:18.262361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.262385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.969 [2024-05-14 23:05:18.262402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:46.969 [2024-05-14 23:05:18.262426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.969 [2024-05-14 23:05:18.262444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.262965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.262988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.263006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.263046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:46.970 [2024-05-14 23:05:18.263087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.263129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.263169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.263220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.263261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.263302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.263362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.263432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.263497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.970 [2024-05-14 23:05:18.263539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:27 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.970 [2024-05-14 23:05:18.263581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.970 [2024-05-14 23:05:18.263621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.970 [2024-05-14 23:05:18.263662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.970 [2024-05-14 23:05:18.263703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.970 [2024-05-14 23:05:18.263744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.970 [2024-05-14 23:05:18.263817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.970 [2024-05-14 23:05:18.263841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.263859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.263882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.263900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.263923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.263941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.263964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.263981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:18:46.971 [2024-05-14 23:05:18.264425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.264609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.264627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.265505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.265539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.265570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.265590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.265615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.265633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.265656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.265673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.265696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.265727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.265752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.265786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.265812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.265830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.265853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.265871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.265894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.265911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.265935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.265952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.265976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.265994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.266018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.266035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.266058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.266076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.266098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.266116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.971 [2024-05-14 23:05:18.266138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.971 [2024-05-14 23:05:18.266156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.266196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.266242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.266332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.266402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.266474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.266546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.266620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.266690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.266777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.972 [2024-05-14 23:05:18.266855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.266920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.266958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.266985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.267938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.267971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.268013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.268046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.268089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.268135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.268177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.268207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.268244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.268272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.268308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.268338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.268373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.268404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.268444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.268474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.268512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.268543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.268583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.268612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.972 [2024-05-14 23:05:18.268653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.972 [2024-05-14 23:05:18.268685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.268726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.268785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.268835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.268869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.268914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.268947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.268990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.269042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:18:46.973 [2024-05-14 23:05:18.269112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.269144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.269183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.269213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.269251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.269282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.269322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.269350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.269390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.269418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.269456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.269485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.269521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.269550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.269591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.269621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.269661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.269690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.269731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.269780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.270786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.270821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.270856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.270876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.270915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.270935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.270959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.270977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.271018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.271059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.271100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.271140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.271181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.271224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.271266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.271307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.973 [2024-05-14 23:05:18.271354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.973 [2024-05-14 23:05:18.271425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.973 [2024-05-14 23:05:18.271504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.973 [2024-05-14 23:05:18.271546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.973 [2024-05-14 23:05:18.271587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.973 [2024-05-14 23:05:18.271627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.973 [2024-05-14 23:05:18.271670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:46.973 [2024-05-14 23:05:18.271711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.973 [2024-05-14 23:05:18.271751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.973 [2024-05-14 23:05:18.271813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.973 [2024-05-14 23:05:18.271854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.973 [2024-05-14 23:05:18.271896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.973 [2024-05-14 23:05:18.271919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.973 [2024-05-14 23:05:18.271937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.271960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.271978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.974 [2024-05-14 23:05:18.272555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.272599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.272640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.272681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.272723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.272779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.272824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.272865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.272905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.272946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.272969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.272987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:18:46.974 [2024-05-14 23:05:18.273010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.974 [2024-05-14 23:05:18.273587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.974 [2024-05-14 23:05:18.273617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.273635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.274961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.274984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.975 [2024-05-14 23:05:18.275141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.975 [2024-05-14 23:05:18.275656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.975 [2024-05-14 23:05:18.275674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.275697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.275715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.275738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.275756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.275796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.275814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.275837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.275855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.275879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.275896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.275920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.275937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.275960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.275978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
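The qpair notices in this stretch of the log all follow the same two formats: a queued-command print from nvme_io_qpair_print_command and a matching completion print from spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) for each outstanding READ/WRITE on qid:1. A minimal, hypothetical helper like the sketch below (the script and file names are illustrative, and it only assumes the console output has been saved to a plain-text file in the line format visible here) can condense such output into per-opcode and per-status counts:

#!/usr/bin/env bash
# Hypothetical summarizer for the repeated SPDK qpair notices in a saved console log.
# Usage: ./summarize_qpair_notices.sh console.log
log="${1:-console.log}"

# Count queued I/O command prints by opcode (e.g. READ, WRITE).
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$log" \
  | awk '{print $NF}' | sort | uniq -c

# Count completion prints by status string, e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)".
grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [^(]*([0-9a-f/]*)' "$log" \
  | sed 's/.*\*NOTICE\*: //' | sort | uniq -c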
00:18:46.976 [2024-05-14 23:05:18.276419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.276961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.276979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.277002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.277019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.277043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.277081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.277108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.277127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.277150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.277176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.277202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.277219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.277244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.277262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.277670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.277720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.277830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.277867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.277910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.277940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.277986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.278021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.278067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.278109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.278153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.976 [2024-05-14 23:05:18.278186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.976 [2024-05-14 23:05:18.278231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.278254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.278311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.278383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.278461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.977 [2024-05-14 23:05:18.278516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.278561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.278607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.278652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.278697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.278743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.278808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.278855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.278901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.278946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.278973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.278991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.279911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.977 [2024-05-14 23:05:18.279929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 
dnr:0 00:18:46.977 [2024-05-14 23:05:18.279956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.279974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.280001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.280019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.280046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.280064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.280091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.280109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.280137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.280155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.280182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.280200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.280236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.280255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.977 [2024-05-14 23:05:18.280282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.977 [2024-05-14 23:05:18.280300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.280961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.280979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.281007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.281025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:18.281220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.978 [2024-05-14 23:05:18.281246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.416615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.416684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.416746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.416783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.416820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.416838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.416860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.416877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.416899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.416916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.416938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:46.978 [2024-05-14 23:05:25.416955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.416977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:94 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.978 [2024-05-14 23:05:25.417755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.978 [2024-05-14 23:05:25.417799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.417820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.417844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.417871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.417895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.417912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 
23:05:25.417937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.417954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.417977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.417994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.418873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.418890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.420416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.979 [2024-05-14 23:05:25.420441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.420471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.420489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.420516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.420533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.420559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.420576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.420604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.420631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.420657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.420674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.420700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:46.979 [2024-05-14 23:05:25.420718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.420745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.420784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.420936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.420962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.420995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.421014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.421043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.421060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.421105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.421124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.421154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.421171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.421204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.421221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.979 [2024-05-14 23:05:25.421250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.979 [2024-05-14 23:05:25.421267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.980 [2024-05-14 23:05:25.421314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:46 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421839] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.421980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.421998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:25.422279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:25.422301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.810488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.980 [2024-05-14 23:05:38.810543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.810629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.980 [2024-05-14 23:05:38.810661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.810688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.810705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.810728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.810746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.810783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.810803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.810825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.810843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.810865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.810882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.810905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.810921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.810943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.810960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.810982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.810999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.811021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.811038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.811060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.811076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.811099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.811115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.811150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.811167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.980 [2024-05-14 23:05:38.811190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.980 [2024-05-14 23:05:38.811207] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 
[2024-05-14 23:05:38.811595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.811974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.811996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 
nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812408] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 
cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.812973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.812995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.813011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.813033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.813050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.813072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.981 [2024-05-14 23:05:38.813088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.981 [2024-05-14 23:05:38.813125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.813150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.813174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.813191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.813214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.813231] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.813367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.982 [2024-05-14 23:05:38.813396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.813413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.982 [2024-05-14 23:05:38.813428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.813449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.982 [2024-05-14 23:05:38.813463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.813478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.982 [2024-05-14 23:05:38.813492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.813507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.813522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.813549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa39f0 is same with the state(5) to be set 00:18:46.982 [2024-05-14 23:05:38.813981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.814008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.814040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.814071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.814100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.814153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.814188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.814217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.814247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.814277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.982 [2024-05-14 23:05:38.814307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130920 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.982 [2024-05-14 23:05:38.814791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.814979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.814994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.815008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.815024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.815038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.815053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.815067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.815083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.815097] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.815112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.815126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.815142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.815156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.815171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.815185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.815200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.982 [2024-05-14 23:05:38.815214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.982 [2024-05-14 23:05:38.815229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 
[2024-05-14 23:05:38.815708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.815776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.815810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.815839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.815870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.815899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.815929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.815958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.815975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.815998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.816020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.816035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.816050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.983 [2024-05-14 23:05:38.816064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.816079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.816093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.816109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.816123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.816146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.816160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.816176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.816190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.816206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.816220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.816235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.816248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.816264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.816278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.816306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.816320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.816335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.831052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.831146] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.831185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.831221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.831255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.831291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.831321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.831356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.831386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.831421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.831452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.831485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.831553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.831589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.831620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.831654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.831685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.831719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.831748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.983 [2024-05-14 23:05:38.831814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.983 [2024-05-14 23:05:38.831847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.831882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 
nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.831912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.831946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.831976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130536 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.832940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.832974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:46.984 [2024-05-14 23:05:38.833341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.833962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.833992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.834026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.834057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.834090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 
23:05:38.834129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.834189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.984 [2024-05-14 23:05:38.834248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.984 [2024-05-14 23:05:38.834388] bdev_nvme.c:7795:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 00:18:46.984 [2024-05-14 23:05:38.834458] bdev_nvme.c:7795:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 00:18:46.984 [2024-05-14 23:05:38.834501] bdev_nvme.c:7795:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 00:18:46.984 [2024-05-14 23:05:38.834532] bdev_nvme.c:7795:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 00:18:46.984 [2024-05-14 23:05:38.834584] bdev_nvme.c:7795:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 00:18:46.984 [2024-05-14 23:05:38.834627] bdev_nvme.c:7795:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 00:18:46.984 [2024-05-14 23:05:38.834660] bdev_nvme.c:7795:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 00:18:46.984 [2024-05-14 23:05:38.834691] bdev_nvme.c:7795:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 00:18:46.984 [2024-05-14 23:05:38.834721] bdev_nvme.c:7795:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 00:18:46.984 [2024-05-14 23:05:38.834752] bdev_nvme.c:7795:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 00:18:46.984 [2024-05-14 23:05:38.834855] bdev_nvme.c:7795:bdev_nvme_readv: *ERROR*: readv failed: rc = -6 00:18:46.984 [2024-05-14 23:05:38.834973] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfd1310 was disconnected and freed. reset controller. 00:18:46.984 [2024-05-14 23:05:38.835136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa39f0 (9): Bad file descriptor 00:18:46.984 [2024-05-14 23:05:38.835219] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.984 [2024-05-14 23:05:38.835421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.984 [2024-05-14 23:05:38.835550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.984 [2024-05-14 23:05:38.835597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa39f0 with addr=10.0.0.2, port=4421 00:18:46.984 [2024-05-14 23:05:38.835630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa39f0 is same with the state(5) to be set 00:18:46.984 [2024-05-14 23:05:38.838311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa39f0 (9): Bad file descriptor 00:18:46.984 [2024-05-14 23:05:38.838929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:46.985 [2024-05-14 23:05:38.838989] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:46.985 [2024-05-14 23:05:38.839022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.985 [2024-05-14 23:05:38.839120] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
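The long run of ABORTED - SQ DELETION (00/08) completions above is the expected signature of a path being torn down under load: every READ still in flight on qid:1 is failed back to bdev_nvme, which logs "readv failed: rc = -6" (-ENXIO) for each one and then schedules the controller reset that follows. When reading a saved console log like this one, the span is easier to digest as a summary; a minimal sketch with standard tools (the log file name is a placeholder, not part of the test):

  # count aborted completions per submission queue in a saved log (hypothetical file name)
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c
  # count the reads bdev_nvme failed back in the same window
  grep -cF 'readv failed: rc = -6' console.log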
00:18:46.985 [2024-05-14 23:05:38.839158] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.985 [2024-05-14 23:05:48.931524] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:46.985 Received shutdown signal, test time was about 55.579230 seconds 00:18:46.985 00:18:46.985 Latency(us) 00:18:46.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.985 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:46.985 Verification LBA range: start 0x0 length 0x4000 00:18:46.985 Nvme0n1 : 55.58 6997.52 27.33 0.00 0.00 18259.79 357.47 7046430.72 00:18:46.985 =================================================================================================================== 00:18:46.985 Total : 6997.52 27.33 0.00 0.00 18259.79 357.47 7046430.72 00:18:46.985 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.244 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:47.244 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:47.244 23:05:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:47.244 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:47.244 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:18:47.244 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:47.244 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:18:47.244 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:47.244 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:47.244 rmmod nvme_tcp 00:18:47.503 rmmod nvme_fabrics 00:18:47.503 rmmod nvme_keyring 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 88421 ']' 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 88421 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 88421 ']' 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 88421 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88421 00:18:47.503 killing process with pid 88421 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88421' 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # 
kill 88421 00:18:47.503 [2024-05-14 23:05:59.699363] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:47.503 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 88421 00:18:47.762 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:47.762 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:47.762 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:47.762 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.762 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.762 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.762 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.762 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.762 23:05:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:47.762 00:18:47.762 real 1m0.954s 00:18:47.762 user 2m53.732s 00:18:47.762 sys 0m13.305s 00:18:47.762 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:47.762 23:05:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:47.762 ************************************ 00:18:47.762 END TEST nvmf_host_multipath 00:18:47.762 ************************************ 00:18:47.762 23:05:59 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:47.762 23:05:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:47.762 23:05:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:47.762 23:05:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:47.762 ************************************ 00:18:47.762 START TEST nvmf_timeout 00:18:47.762 ************************************ 00:18:47.762 23:05:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:47.762 * Looking for test storage... 
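Between the end of the multipath run and the nvmf_timeout test that begins here, the trace above performs the usual cleanup: the subsystem is deleted over RPC, the nvmf_tgt process (pid 88421) is killed and reaped, the initiator-side nvme-tcp, nvme-fabrics and nvme-keyring modules are removed, and the initiator veth address is flushed. Condensed into plain commands (a sketch run from the SPDK tree; the ip netns delete line is an assumption about what the remove_spdk_ns helper does):

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem before stopping the target
  kill "$nvmfpid" && wait "$nvmfpid"                                  # stop nvmf_tgt and reap it
  modprobe -r nvme-tcp nvme-fabrics nvme-keyring                      # unload the initiator-side kernel modules
  ip -4 addr flush nvmf_init_if                                       # clear the initiator-side veth address
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true                # assumption: this is the effect of remove_spdk_ns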
00:18:47.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.763 
23:06:00 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.763 23:06:00 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:47.763 Cannot find device "nvmf_tgt_br" 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:47.763 Cannot find device "nvmf_tgt_br2" 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:47.763 Cannot find device "nvmf_tgt_br" 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:47.763 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:48.021 Cannot find device "nvmf_tgt_br2" 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.021 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:48.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:18:48.021 00:18:48.021 --- 10.0.0.2 ping statistics --- 00:18:48.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.021 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:48.021 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:48.021 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:48.021 00:18:48.021 --- 10.0.0.3 ping statistics --- 00:18:48.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.021 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:48.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:48.021 00:18:48.021 --- 10.0.0.1 ping statistics --- 00:18:48.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.021 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:48.021 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=89770 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 89770 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 89770 ']' 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:48.278 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:48.278 [2024-05-14 23:06:00.482300] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
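The network plumbing traced above is what the rest of this test rides on: a dedicated nvmf_tgt_ns_spdk namespace holds the target ends of two veth pairs (addressed 10.0.0.2 and 10.0.0.3), the initiator keeps nvmf_init_if at 10.0.0.1, everything is joined through the nvmf_br bridge, an iptables rule admits TCP port 4420, and the three pings confirm both directions before the target application is started inside the namespace. Stripped down to one target path, the essential commands from the trace are (the matching "ip link set ... up" calls are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br              # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                       # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # admit the NVMe/TCP listener port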
00:18:48.278 [2024-05-14 23:06:00.482384] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.278 [2024-05-14 23:06:00.622752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:48.537 [2024-05-14 23:06:00.682754] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.537 [2024-05-14 23:06:00.682823] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.537 [2024-05-14 23:06:00.682842] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.537 [2024-05-14 23:06:00.682856] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.537 [2024-05-14 23:06:00.682868] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.537 [2024-05-14 23:06:00.682971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.537 [2024-05-14 23:06:00.682990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.537 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:48.537 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:18:48.537 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.537 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.537 23:06:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:48.537 23:06:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.537 23:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:48.537 23:06:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:48.795 [2024-05-14 23:06:01.063928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.795 23:06:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:49.079 Malloc0 00:18:49.079 23:06:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:49.337 23:06:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:49.594 23:06:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.851 [2024-05-14 23:06:02.066739] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:49.851 [2024-05-14 23:06:02.067023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.851 23:06:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=89853 00:18:49.851 23:06:02 nvmf_tcp.nvmf_timeout 
-- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:49.851 23:06:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 89853 /var/tmp/bdevperf.sock 00:18:49.852 23:06:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 89853 ']' 00:18:49.852 23:06:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.852 23:06:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:49.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.852 23:06:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.852 23:06:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:49.852 23:06:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:49.852 [2024-05-14 23:06:02.142945] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:18:49.852 [2024-05-14 23:06:02.143043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89853 ] 00:18:50.109 [2024-05-14 23:06:02.280418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.109 [2024-05-14 23:06:02.359757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.045 23:06:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:51.045 23:06:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:18:51.045 23:06:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:51.303 23:06:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:51.561 NVMe0n1 00:18:51.561 23:06:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:51.561 23:06:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=89901 00:18:51.561 23:06:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:51.561 Running I/O for 10 seconds... 
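The two RPCs issued against the bdevperf socket just above are what give this test its shape: NVMe0 is attached with --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, so when its connection to the target drops, bdev_nvme retries the TCP connection every 2 seconds and gives the controller up after roughly 5 seconds of failed reconnects. The same setup in isolation (a sketch with paths relative to the SPDK tree; socket path, bdev name and NQN as traced above):

  # configure bdev_nvme inside bdevperf through its private RPC socket
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # start the configured bdevperf job once the NVMe0n1 bdev shows up
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests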
00:18:52.497 23:06:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.759 [2024-05-14 23:06:05.066279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066440] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066498] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066521] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066530] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066670] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066686] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the 
state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.759 [2024-05-14 23:06:05.066760] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066785] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066818] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.066878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1652b50 is same with the state(5) to be set 00:18:52.760 [2024-05-14 23:06:05.067260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067571] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.067979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.067991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.068002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.068012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.068023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.068033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.068045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.068054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.068067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.068083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.760 [2024-05-14 23:06:05.068104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.760 [2024-05-14 23:06:05.068122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:52.761 [2024-05-14 23:06:05.068135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 
23:06:05.068389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:40 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.761 [2024-05-14 23:06:05.068927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.761 [2024-05-14 23:06:05.068956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.068976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.761 [2024-05-14 23:06:05.068994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.069013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.761 [2024-05-14 23:06:05.069024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.069036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.761 [2024-05-14 23:06:05.069045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.069057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.761 [2024-05-14 23:06:05.069067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.069079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.761 [2024-05-14 23:06:05.069088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.761 [2024-05-14 23:06:05.069099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80864 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 
23:06:05.069438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.069987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.762 [2024-05-14 23:06:05.069996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.070008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.762 [2024-05-14 23:06:05.070024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.070043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.762 [2024-05-14 23:06:05.070061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.070080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.762 [2024-05-14 23:06:05.070093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.070106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.762 [2024-05-14 23:06:05.070116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.762 [2024-05-14 23:06:05.070127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.762 [2024-05-14 23:06:05.070136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 
[2024-05-14 23:06:05.070515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.763 [2024-05-14 23:06:05.070608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9007b0 is same with the state(5) to be set 00:18:52.763 [2024-05-14 23:06:05.070649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.763 [2024-05-14 23:06:05.070663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.763 [2024-05-14 23:06:05.070678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80792 len:8 PRP1 0x0 PRP2 0x0 00:18:52.763 [2024-05-14 23:06:05.070689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.763 [2024-05-14 23:06:05.070733] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9007b0 was disconnected and freed. reset controller. 
00:18:52.763 [2024-05-14 23:06:05.071036] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:52.763 [2024-05-14 23:06:05.071133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x891a00 (9): Bad file descriptor 00:18:52.763 [2024-05-14 23:06:05.071268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:52.763 [2024-05-14 23:06:05.071338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:52.763 [2024-05-14 23:06:05.071362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x891a00 with addr=10.0.0.2, port=4420 00:18:52.763 [2024-05-14 23:06:05.071374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a00 is same with the state(5) to be set 00:18:52.763 [2024-05-14 23:06:05.071395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x891a00 (9): Bad file descriptor 00:18:52.763 [2024-05-14 23:06:05.071412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:52.763 [2024-05-14 23:06:05.071425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:52.763 [2024-05-14 23:06:05.071442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:52.763 [2024-05-14 23:06:05.071471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:52.763 [2024-05-14 23:06:05.071485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:52.763 23:06:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:55.299 [2024-05-14 23:06:07.071633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:55.299 [2024-05-14 23:06:07.071758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:55.299 [2024-05-14 23:06:07.071796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x891a00 with addr=10.0.0.2, port=4420 00:18:55.299 [2024-05-14 23:06:07.071810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a00 is same with the state(5) to be set 00:18:55.299 [2024-05-14 23:06:07.071840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x891a00 (9): Bad file descriptor 00:18:55.299 [2024-05-14 23:06:07.071882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:55.299 [2024-05-14 23:06:07.071894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:55.299 [2024-05-14 23:06:07.071905] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:55.299 [2024-05-14 23:06:07.071940] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:55.299 [2024-05-14 23:06:07.071961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:55.299 23:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:55.299 23:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:55.299 23:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:55.299 23:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:55.299 23:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:55.299 23:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:55.299 23:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:55.557 23:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:55.557 23:06:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:56.936 [2024-05-14 23:06:09.072112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.936 [2024-05-14 23:06:09.072222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.936 [2024-05-14 23:06:09.072242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x891a00 with addr=10.0.0.2, port=4420 00:18:56.936 [2024-05-14 23:06:09.072256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a00 is same with the state(5) to be set 00:18:56.936 [2024-05-14 23:06:09.072284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x891a00 (9): Bad file descriptor 00:18:56.936 [2024-05-14 23:06:09.072305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:56.936 [2024-05-14 23:06:09.072315] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:56.936 [2024-05-14 23:06:09.072326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:56.936 [2024-05-14 23:06:09.072354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:56.936 [2024-05-14 23:06:09.072366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.836 [2024-05-14 23:06:11.072415] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:59.772 00:18:59.772 Latency(us) 00:18:59.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.772 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:59.772 Verification LBA range: start 0x0 length 0x4000 00:18:59.772 NVMe0n1 : 8.15 1227.85 4.80 15.70 0.00 102745.04 2189.50 7015926.69 00:18:59.772 =================================================================================================================== 00:18:59.772 Total : 1227.85 4.80 15.70 0.00 102745.04 2189.50 7015926.69 00:18:59.772 0 00:19:00.708 23:06:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:00.708 23:06:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:00.708 23:06:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 89901 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 89853 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 89853 ']' 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 89853 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:00.967 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89853 00:19:01.234 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:01.234 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:01.234 killing process with pid 89853 00:19:01.234 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89853' 00:19:01.235 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 89853 00:19:01.235 Received shutdown signal, test time was about 9.460719 seconds 00:19:01.235 00:19:01.235 Latency(us) 00:19:01.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.235 =================================================================================================================== 00:19:01.235 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.235 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 89853 00:19:01.235 23:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.495 [2024-05-14 23:06:13.771407] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.495 23:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=90053 00:19:01.495 23:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:01.495 23:06:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 90053 /var/tmp/bdevperf.sock 00:19:01.495 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 90053 ']' 00:19:01.495 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.495 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:01.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.495 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.495 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:01.495 23:06:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:01.495 [2024-05-14 23:06:13.837587] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:19:01.495 [2024-05-14 23:06:13.837665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90053 ] 00:19:01.760 [2024-05-14 23:06:13.968441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.760 [2024-05-14 23:06:14.030894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.710 23:06:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:02.710 23:06:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:19:02.710 23:06:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:02.968 23:06:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:03.226 NVMe0n1 00:19:03.226 23:06:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=90106 00:19:03.226 23:06:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:03.226 23:06:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:03.484 Running I/O for 10 seconds... 
00:19:04.421 23:06:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.687 [2024-05-14 23:06:16.869743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869845] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869962] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.869995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184e6a0 is same with the state(5) to be set 00:19:04.687 [2024-05-14 23:06:16.870309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.687 [2024-05-14 23:06:16.870338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.687 [2024-05-14 23:06:16.870372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.687 [2024-05-14 23:06:16.870394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.687 [2024-05-14 23:06:16.870416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.687 [2024-05-14 23:06:16.870440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.687 [2024-05-14 23:06:16.870461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.687 [2024-05-14 23:06:16.870481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.687 [2024-05-14 23:06:16.870502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.687 [2024-05-14 23:06:16.870523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.687 [2024-05-14 23:06:16.870543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.687 [2024-05-14 23:06:16.870563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.687 [2024-05-14 23:06:16.870584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:04.687 [2024-05-14 23:06:16.870595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.687 [2024-05-14 23:06:16.870604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 
[2024-05-14 23:06:16.870825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.870989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.870998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871035] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871239] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.688 [2024-05-14 23:06:16.871326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.688 [2024-05-14 23:06:16.871336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 
23:06:16.871899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.689 [2024-05-14 23:06:16.871986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.689 [2024-05-14 23:06:16.871998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.690 [2024-05-14 23:06:16.872235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.690 [2024-05-14 23:06:16.872603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.690 [2024-05-14 23:06:16.872612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 
23:06:16.872746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.691 [2024-05-14 23:06:16.872870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.691 [2024-05-14 23:06:16.872890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.691 [2024-05-14 23:06:16.872910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.691 [2024-05-14 23:06:16.872931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.691 [2024-05-14 23:06:16.872952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.691 [2024-05-14 23:06:16.872973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.872984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.691 [2024-05-14 23:06:16.872993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.873004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.691 [2024-05-14 23:06:16.873013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.873024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.691 [2024-05-14 23:06:16.873033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.873044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147f690 is same with the state(5) to be set 00:19:04.691 [2024-05-14 23:06:16.873058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.691 [2024-05-14 23:06:16.873066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.691 [2024-05-14 23:06:16.873075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81248 len:8 PRP1 0x0 PRP2 0x0 00:19:04.691 [2024-05-14 23:06:16.873084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.691 [2024-05-14 23:06:16.873136] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x147f690 was disconnected and freed. reset controller. 
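The block above is the immediate effect of pulling the listener: every I/O still queued on the qpair (queue depth 128 in this run) is printed by nvme_qpair.c and completed manually as ABORTED - SQ DELETION, after which the qpair is disconnected, freed, and a controller reset is scheduled. When reading a saved copy of such a console log offline, a rough cross-check that the abort burst matches the queue depth can be done with standard tools; the file name below is hypothetical.

  # Lines reporting an abort completion in the saved log.
  grep -c 'ABORTED - SQ DELETION' bdevperf_console.log
  # Distinct command identifiers that were still outstanding on the qpair.
  grep 'nvme_io_qpair_print_command' bdevperf_console.log | grep -oE 'cid:[0-9]+' | sort -u | wc -l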
00:19:04.691 [2024-05-14 23:06:16.873389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:04.691 [2024-05-14 23:06:16.873466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410a00 (9): Bad file descriptor 00:19:04.691 [2024-05-14 23:06:16.873565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:04.691 [2024-05-14 23:06:16.873620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:04.691 [2024-05-14 23:06:16.873637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410a00 with addr=10.0.0.2, port=4420 00:19:04.691 [2024-05-14 23:06:16.873648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410a00 is same with the state(5) to be set 00:19:04.691 [2024-05-14 23:06:16.873666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410a00 (9): Bad file descriptor 00:19:04.691 [2024-05-14 23:06:16.873682] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:04.691 [2024-05-14 23:06:16.873692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:04.691 [2024-05-14 23:06:16.873703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:04.691 [2024-05-14 23:06:16.873723] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:04.691 [2024-05-14 23:06:16.873735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:04.691 23:06:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:05.650 [2024-05-14 23:06:17.873882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:05.650 [2024-05-14 23:06:17.873980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:05.650 [2024-05-14 23:06:17.874000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410a00 with addr=10.0.0.2, port=4420 00:19:05.650 [2024-05-14 23:06:17.874015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410a00 is same with the state(5) to be set 00:19:05.650 [2024-05-14 23:06:17.874043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410a00 (9): Bad file descriptor 00:19:05.650 [2024-05-14 23:06:17.874063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:05.650 [2024-05-14 23:06:17.874074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:05.650 [2024-05-14 23:06:17.874084] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:05.650 [2024-05-14 23:06:17.874111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
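Between the qpair teardown above and the listener coming back, the host side retries the connection once per second (matching --reconnect-delay-sec 1), and each attempt fails with connect() errno 111 (connection refused) because nothing is listening on port 4420. The outage ends well inside the 5-second ctrlr-loss timeout, so the reset that follows succeeds rather than the controller being dropped. A hand-driven version of the same outage, assuming the target and bdevperf RPC sockets from this run, is sketched below; only the sleep duration and the get_controllers poll are additions for illustration.

  # Target side: drop the listener, wait briefly, bring it back
  # (keep the gap shorter than --ctrlr-loss-timeout-sec).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: watch the controller state over the bdevperf RPC socket
  # while the reconnect attempts are running.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers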
00:19:05.650 [2024-05-14 23:06:17.874124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:05.650 23:06:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:05.909 [2024-05-14 23:06:18.159402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.909 23:06:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 90106 00:19:06.857 [2024-05-14 23:06:18.891595] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:13.464 00:19:13.464 Latency(us) 00:19:13.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.464 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:13.464 Verification LBA range: start 0x0 length 0x4000 00:19:13.464 NVMe0n1 : 10.01 6130.13 23.95 0.00 0.00 20839.34 2115.03 3019898.88 00:19:13.464 =================================================================================================================== 00:19:13.464 Total : 6130.13 23.95 0.00 0.00 20839.34 2115.03 3019898.88 00:19:13.464 0 00:19:13.464 23:06:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=90223 00:19:13.464 23:06:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:13.464 23:06:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:13.464 Running I/O for 10 seconds... 00:19:14.398 23:06:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.658 [2024-05-14 23:06:26.954574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954633] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954718] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.658 [2024-05-14 23:06:26.954906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.954914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the 
state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.954923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.954932] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.954942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.954951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.954959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.954967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.954976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.954996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955128] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f780 is same with the state(5) to be set 00:19:14.659 [2024-05-14 23:06:26.955440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:14.659 [2024-05-14 23:06:26.955782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.659 [2024-05-14 23:06:26.955929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.659 [2024-05-14 23:06:26.955939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.955950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.955959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.955970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.955979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.955990] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.955999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:109 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82784 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.660 [2024-05-14 23:06:26.956711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.660 [2024-05-14 23:06:26.956721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.661 [2024-05-14 23:06:26.956741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.661 [2024-05-14 23:06:26.956776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.661 [2024-05-14 23:06:26.956799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.661 [2024-05-14 23:06:26.956820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:14.661 [2024-05-14 23:06:26.956840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.661 [2024-05-14 23:06:26.956861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.661 [2024-05-14 23:06:26.956882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.661 [2024-05-14 23:06:26.956902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.661 [2024-05-14 23:06:26.956923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.661 [2024-05-14 23:06:26.956943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.661 [2024-05-14 23:06:26.956964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.661 [2024-05-14 23:06:26.956984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.956995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957048] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957267] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.661 [2024-05-14 23:06:26.957358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.661 [2024-05-14 23:06:26.957370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 
[2024-05-14 23:06:26.957700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.662 [2024-05-14 23:06:26.957942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.662 [2024-05-14 23:06:26.957951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.957962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.663 [2024-05-14 23:06:26.957971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.957983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.663 [2024-05-14 23:06:26.957992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.663 [2024-05-14 23:06:26.958012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.663 [2024-05-14 23:06:26.958032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.663 [2024-05-14 23:06:26.958052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.663 [2024-05-14 23:06:26.958074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.663 [2024-05-14 23:06:26.958095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.663 [2024-05-14 23:06:26.958115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.663 [2024-05-14 23:06:26.958136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.663 [2024-05-14 23:06:26.958156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.663 [2024-05-14 23:06:26.958176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148f4f0 is same with the state(5) to be set 00:19:14.663 [2024-05-14 23:06:26.958202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.663 [2024-05-14 23:06:26.958209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.663 [2024-05-14 23:06:26.958218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83000 len:8 PRP1 0x0 PRP2 0x0 00:19:14.663 [2024-05-14 23:06:26.958227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958269] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x148f4f0 was disconnected and freed. reset controller. 
00:19:14.663 [2024-05-14 23:06:26.958355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.663 [2024-05-14 23:06:26.958383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.663 [2024-05-14 23:06:26.958405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.663 [2024-05-14 23:06:26.958424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.663 [2024-05-14 23:06:26.958444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.663 [2024-05-14 23:06:26.958452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410a00 is same with the state(5) to be set 00:19:14.663 [2024-05-14 23:06:26.958673] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:14.663 [2024-05-14 23:06:26.958704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410a00 (9): Bad file descriptor 00:19:14.663 [2024-05-14 23:06:26.958815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.663 [2024-05-14 23:06:26.958877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.663 [2024-05-14 23:06:26.958899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410a00 with addr=10.0.0.2, port=4420 00:19:14.663 [2024-05-14 23:06:26.958914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410a00 is same with the state(5) to be set 00:19:14.663 [2024-05-14 23:06:26.958933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410a00 (9): Bad file descriptor 00:19:14.663 [2024-05-14 23:06:26.958951] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:14.663 [2024-05-14 23:06:26.958961] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:14.663 [2024-05-14 23:06:26.958971] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:14.663 [2024-05-14 23:06:26.972422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:14.663 [2024-05-14 23:06:26.972477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:14.663 23:06:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:15.597 [2024-05-14 23:06:27.972887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:15.597 [2024-05-14 23:06:27.973153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:15.597 [2024-05-14 23:06:27.973209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410a00 with addr=10.0.0.2, port=4420 00:19:15.597 [2024-05-14 23:06:27.973240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410a00 is same with the state(5) to be set 00:19:15.597 [2024-05-14 23:06:27.973323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410a00 (9): Bad file descriptor 00:19:15.597 [2024-05-14 23:06:27.973387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:15.597 [2024-05-14 23:06:27.973404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:15.597 [2024-05-14 23:06:27.973431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:15.597 [2024-05-14 23:06:27.973513] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:15.597 [2024-05-14 23:06:27.973544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:16.970 [2024-05-14 23:06:28.973714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:16.970 [2024-05-14 23:06:28.973838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:16.970 [2024-05-14 23:06:28.973872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410a00 with addr=10.0.0.2, port=4420 00:19:16.970 [2024-05-14 23:06:28.973897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410a00 is same with the state(5) to be set 00:19:16.970 [2024-05-14 23:06:28.973940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410a00 (9): Bad file descriptor 00:19:16.970 [2024-05-14 23:06:28.973992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:16.970 [2024-05-14 23:06:28.974013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:16.970 [2024-05-14 23:06:28.974041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:16.970 [2024-05-14 23:06:28.974072] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:16.970 [2024-05-14 23:06:28.974086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:17.906 [2024-05-14 23:06:29.974489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.906 [2024-05-14 23:06:29.974605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.906 [2024-05-14 23:06:29.974626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1410a00 with addr=10.0.0.2, port=4420 00:19:17.906 [2024-05-14 23:06:29.974640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410a00 is same with the state(5) to be set 00:19:17.906 [2024-05-14 23:06:29.974935] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1410a00 (9): Bad file descriptor 00:19:17.906 [2024-05-14 23:06:29.975213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:17.906 [2024-05-14 23:06:29.975240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:17.906 [2024-05-14 23:06:29.975253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:17.906 [2024-05-14 23:06:29.979343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:17.906 [2024-05-14 23:06:29.979380] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:17.906 23:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:17.906 [2024-05-14 23:06:30.278547] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.164 23:06:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 90223 00:19:18.730 [2024-05-14 23:06:31.013782] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
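As a reading aid, the recovery step that host/timeout.sh traces at lines 101-103 above amounts to the short sequence sketched below. While the listener is down, every connect() fails with errno 111 (ECONNREFUSED) and the host keeps retrying the controller reset; the test then restores the listener and waits for the running bdevperf process. The commands, NQN, address, port, and pid are copied from the trace; only the line continuation and comments are added, and the sketch assumes the same /home/vagrant/spdk_repo layout as this run.

    # Let the host accumulate a few failed reconnect attempts, then re-create the
    # TCP listener so the next reset can succeed, and wait for bdevperf (pid 90223
    # in this run) to finish its workload.
    sleep 3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    wait 90223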
00:19:24.007 00:19:24.007 Latency(us) 00:19:24.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.007 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:24.007 Verification LBA range: start 0x0 length 0x4000 00:19:24.007 NVMe0n1 : 10.01 5125.93 20.02 3446.93 0.00 14896.58 640.47 3019898.88 00:19:24.007 =================================================================================================================== 00:19:24.007 Total : 5125.93 20.02 3446.93 0.00 14896.58 0.00 3019898.88 00:19:24.007 0 00:19:24.007 23:06:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 90053 00:19:24.007 23:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 90053 ']' 00:19:24.007 23:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 90053 00:19:24.007 23:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:19:24.007 23:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:24.007 23:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90053 00:19:24.007 23:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:24.007 killing process with pid 90053 00:19:24.007 23:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:24.007 23:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90053' 00:19:24.007 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.007 00:19:24.007 Latency(us) 00:19:24.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.007 =================================================================================================================== 00:19:24.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.007 23:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 90053 00:19:24.007 23:06:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 90053 00:19:24.007 23:06:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=90344 00:19:24.007 23:06:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:24.007 23:06:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 90344 /var/tmp/bdevperf.sock 00:19:24.007 23:06:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 90344 ']' 00:19:24.007 23:06:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.007 23:06:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:24.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.007 23:06:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.007 23:06:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:24.007 23:06:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:24.007 [2024-05-14 23:06:36.082123] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:19:24.007 [2024-05-14 23:06:36.082230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90344 ] 00:19:24.007 [2024-05-14 23:06:36.218159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.007 [2024-05-14 23:06:36.280693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.939 23:06:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:24.939 23:06:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:19:24.939 23:06:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=90372 00:19:24.939 23:06:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90344 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:24.939 23:06:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:25.196 23:06:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:25.454 NVMe0n1 00:19:25.454 23:06:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=90431 00:19:25.454 23:06:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:25.454 23:06:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:25.712 Running I/O for 10 seconds... 
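Before this run starts issuing I/O, the trace at host/timeout.sh@109-125 brings up a fresh bdevperf instance with explicit reconnect behaviour. A sketch of that setup is below, assuming the same repository layout and RPC socket as this run; every command and flag is copied from the trace, while the shell glue (backgrounding, the bdevperf_pid variable in place of the literal pid 90344) and the comments are only illustrative.

    # Launch bdevperf idle on its own RPC socket (-z makes it wait for the
    # perform_tests RPC): core mask 0x4, 128-deep 4 KiB random reads for 10 s.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w randread -t 10 -f &
    bdevperf_pid=$!
    # (the test then waits for /var/tmp/bdevperf.sock with its waitforlisten helper)

    # Attach the nvmf_timeout.bt bpftrace script to the new process.
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$bdevperf_pid" \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt &

    # Apply the bdev_nvme retry options the test uses, then attach the target with
    # a 5 s controller-loss timeout and a 2 s reconnect delay -- the values this
    # timeout test exercises when the listener is removed below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Kick off the 10-second workload through the bdevperf RPC helper.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1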
00:19:26.666 23:06:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.927 [2024-05-14 23:06:39.065921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.065985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.065998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.927 [2024-05-14 23:06:39.066114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066157] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066241] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the 
state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066388] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066486] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066632] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 
23:06:39.066718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066783] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.928 [2024-05-14 23:06:39.066817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066904] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same 
with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066931] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.066939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851b40 is same with the state(5) to be set 00:19:26.929 [2024-05-14 23:06:39.067149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067367] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067585] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.929 [2024-05-14 23:06:39.067868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.929 [2024-05-14 23:06:39.067880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.067890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.067902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.067912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.067924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.067934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.067946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.067956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.067968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.067978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.067989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.067999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:26.930 [2024-05-14 23:06:39.068054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 
23:06:39.068273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.930 [2024-05-14 23:06:39.068696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.930 [2024-05-14 23:06:39.068709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:116 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117080 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.068981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.068993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:26.931 [2024-05-14 23:06:39.069175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 
23:06:39.069423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.931 [2024-05-14 23:06:39.069565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.931 [2024-05-14 23:06:39.069575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069872] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.069987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.069998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.070008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.070020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.932 [2024-05-14 23:06:39.070030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.070041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11317b0 is same with the state(5) to be set 00:19:26.932 [2024-05-14 23:06:39.070054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.932 [2024-05-14 23:06:39.070062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.932 [2024-05-14 23:06:39.070071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84824 len:8 PRP1 0x0 PRP2 0x0 00:19:26.932 [2024-05-14 23:06:39.070081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.070125] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11317b0 was disconnected and freed. reset controller. 
00:19:26.932 [2024-05-14 23:06:39.070211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.932 [2024-05-14 23:06:39.070239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.070253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.932 [2024-05-14 23:06:39.070263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.070273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.932 [2024-05-14 23:06:39.070283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.070293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:26.932 [2024-05-14 23:06:39.070302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.932 [2024-05-14 23:06:39.070311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c2a00 is same with the state(5) to be set 00:19:26.932 [2024-05-14 23:06:39.070586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.932 [2024-05-14 23:06:39.070616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c2a00 (9): Bad file descriptor 00:19:26.932 [2024-05-14 23:06:39.070724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:26.932 [2024-05-14 23:06:39.070792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:26.932 [2024-05-14 23:06:39.070812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c2a00 with addr=10.0.0.2, port=4420 00:19:26.932 [2024-05-14 23:06:39.070823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c2a00 is same with the state(5) to be set 00:19:26.932 [2024-05-14 23:06:39.070843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c2a00 (9): Bad file descriptor 00:19:26.933 [2024-05-14 23:06:39.070860] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.933 [2024-05-14 23:06:39.070870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:26.933 [2024-05-14 23:06:39.070880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:26.933 [2024-05-14 23:06:39.080691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:26.933 [2024-05-14 23:06:39.080730] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.933 23:06:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 90431 00:19:28.833 [2024-05-14 23:06:41.080940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.833 [2024-05-14 23:06:41.081053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.833 [2024-05-14 23:06:41.081072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c2a00 with addr=10.0.0.2, port=4420 00:19:28.833 [2024-05-14 23:06:41.081087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c2a00 is same with the state(5) to be set 00:19:28.833 [2024-05-14 23:06:41.081114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c2a00 (9): Bad file descriptor 00:19:28.833 [2024-05-14 23:06:41.081134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:28.833 [2024-05-14 23:06:41.081145] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:28.833 [2024-05-14 23:06:41.081156] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.833 [2024-05-14 23:06:41.081185] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:28.833 [2024-05-14 23:06:41.081197] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.732 [2024-05-14 23:06:43.081391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.732 [2024-05-14 23:06:43.081491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.732 [2024-05-14 23:06:43.081512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c2a00 with addr=10.0.0.2, port=4420 00:19:30.732 [2024-05-14 23:06:43.081526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c2a00 is same with the state(5) to be set 00:19:30.732 [2024-05-14 23:06:43.081553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c2a00 (9): Bad file descriptor 00:19:30.732 [2024-05-14 23:06:43.081574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:30.732 [2024-05-14 23:06:43.081584] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:30.732 [2024-05-14 23:06:43.081594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:30.732 [2024-05-14 23:06:43.081622] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.732 [2024-05-14 23:06:43.081634] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.268 [2024-05-14 23:06:45.081711] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
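The reconnect loop above has a fixed shape: the disconnected qpair is freed and every queued READ is completed manually as ABORTED - SQ DELETION, each reconnect attempt then fails with connect() errno 111 (connection refused, since the listener is gone), the controller is marked failed, and bdev_nvme schedules the next attempt roughly two seconds later. That cadence comes from the reconnect delay the timeout test configures when it attaches the controller. A hedged sketch of that kind of attach call follows; the long option names are recalled from scripts/rpc.py and should be checked against 'scripts/rpc.py bdev_nvme_attach_controller -h' rather than read as the test's exact invocation, and the timeout values are illustrative only:

  # attach the TCP controller with an explicit reconnect/back-off policy (illustrative values)
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 6 --fast-io-fail-timeout-sec 4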
00:19:33.836 00:19:33.836 Latency(us) 00:19:33.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.836 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:33.836 NVMe0n1 : 8.19 2556.07 9.98 15.63 0.00 49694.58 2561.86 7015926.69 00:19:33.836 =================================================================================================================== 00:19:33.836 Total : 2556.07 9.98 15.63 0.00 49694.58 2561.86 7015926.69 00:19:33.836 0 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:33.836 Attaching 5 probes... 00:19:33.836 1425.854001: reset bdev controller NVMe0 00:19:33.836 1425.933107: reconnect bdev controller NVMe0 00:19:33.836 3436.069607: reconnect delay bdev controller NVMe0 00:19:33.836 3436.093708: reconnect bdev controller NVMe0 00:19:33.836 5436.524033: reconnect delay bdev controller NVMe0 00:19:33.836 5436.548454: reconnect bdev controller NVMe0 00:19:33.836 7436.947513: reconnect delay bdev controller NVMe0 00:19:33.836 7436.970302: reconnect bdev controller NVMe0 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 90372 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 90344 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 90344 ']' 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 90344 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90344 00:19:33.836 killing process with pid 90344 00:19:33.836 Received shutdown signal, test time was about 8.247481 seconds 00:19:33.836 00:19:33.836 Latency(us) 00:19:33.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.836 =================================================================================================================== 00:19:33.836 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90344' 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 90344 00:19:33.836 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 90344 00:19:34.094 23:06:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:34.353 23:06:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:34.353 23:06:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:34.353 23:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:34.353 
23:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:34.611 rmmod nvme_tcp 00:19:34.611 rmmod nvme_fabrics 00:19:34.611 rmmod nvme_keyring 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 89770 ']' 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 89770 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 89770 ']' 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 89770 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:19:34.611 23:06:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:34.611 23:06:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89770 00:19:34.870 killing process with pid 89770 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89770' 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 89770 00:19:34.870 [2024-05-14 23:06:47.020866] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 89770 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:34.870 00:19:34.870 real 0m47.274s 00:19:34.870 user 2m20.703s 00:19:34.870 sys 0m4.708s 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:34.870 ************************************ 00:19:34.870 END TEST nvmf_timeout 00:19:34.870 23:06:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:34.870 ************************************ 
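With the timeout test finished, its pass/fail decision is easy to miss in the noise above: host/timeout.sh cats the trace file, counts the 'reconnect delay bdev controller NVMe0' probe hits (the 'Attaching 5 probes...' banner is characteristic of bpftrace), and fails only if two or fewer were recorded. A minimal sketch of that check, modelled on the host/timeout.sh fragments quoted in the log; anything not quoted above is an assumption:

  # three delayed reconnects were recorded in this run, so (( 3 <= 2 )) is false and the test passes
  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  delay_count=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
  if (( delay_count <= 2 )); then
      echo 'expected delayed reconnects were not observed' >&2
      exit 1
  fi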
00:19:35.129 23:06:47 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ virt == phy ]] 00:19:35.129 23:06:47 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:19:35.129 23:06:47 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:35.129 23:06:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.129 23:06:47 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:19:35.129 00:19:35.129 real 12m35.606s 00:19:35.129 user 35m14.317s 00:19:35.129 sys 2m55.250s 00:19:35.129 ************************************ 00:19:35.129 END TEST nvmf_tcp 00:19:35.129 ************************************ 00:19:35.129 23:06:47 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:35.129 23:06:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.129 23:06:47 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:19:35.129 23:06:47 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:19:35.129 23:06:47 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:35.129 23:06:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:35.129 23:06:47 -- common/autotest_common.sh@10 -- # set +x 00:19:35.130 ************************************ 00:19:35.130 START TEST spdkcli_nvmf_tcp 00:19:35.130 ************************************ 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:19:35.130 * Looking for test storage... 00:19:35.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:19:35.130 23:06:47 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=90654 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 90654 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 90654 ']' 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:35.130 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.388 [2024-05-14 23:06:47.544288] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:19:35.388 [2024-05-14 23:06:47.544382] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90654 ] 00:19:35.388 [2024-05-14 23:06:47.676223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:35.388 [2024-05-14 23:06:47.736918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.388 [2024-05-14 23:06:47.736927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.646 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:35.646 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:19:35.646 23:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:19:35.646 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:35.646 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.646 23:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:19:35.646 23:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:19:35.646 23:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:19:35.646 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:35.646 23:06:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.646 23:06:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:35.646 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:35.646 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:19:35.646 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:19:35.646 '\''/bdevs/malloc create 32 512 
Malloc5'\'' '\''Malloc5'\'' True 00:19:35.646 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:19:35.646 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:19:35.646 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:19:35.646 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:19:35.646 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:19:35.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:19:35.646 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:19:35.646 ' 00:19:38.175 [2024-05-14 23:06:50.568277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.550 [2024-05-14 23:06:51.857203] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:39.550 [2024-05-14 23:06:51.857796] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:19:42.075 [2024-05-14 23:06:54.215143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
127.0.0.1 port 4261 *** 00:19:43.973 [2024-05-14 23:06:56.248596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:19:45.874 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:19:45.874 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:19:45.874 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:19:45.874 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:19:45.874 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:19:45.874 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:19:45.874 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:19:45.874 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:19:45.874 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:19:45.874 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces 
create Malloc5', 'Malloc5', True] 00:19:45.874 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:19:45.874 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:19:45.874 23:06:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:19:45.874 23:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.874 23:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.874 23:06:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:19:45.874 23:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:45.874 23:06:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.874 23:06:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:19:45.874 23:06:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:19:46.131 23:06:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:19:46.131 23:06:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:19:46.131 23:06:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:19:46.131 23:06:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.131 23:06:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:46.131 23:06:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:19:46.131 23:06:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:46.131 23:06:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:46.131 23:06:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:19:46.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:19:46.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:19:46.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:19:46.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:19:46.131 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:19:46.131 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:19:46.131 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:19:46.131 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:19:46.131 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:19:46.131 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:19:46.131 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:19:46.131 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:19:46.131 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:19:46.131 ' 00:19:52.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:19:52.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 
'Malloc4', False] 00:19:52.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:19:52.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:19:52.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:19:52.795 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:19:52.795 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:19:52.795 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:19:52.795 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:19:52.795 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:19:52.795 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:19:52.795 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:19:52.795 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:19:52.795 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 90654 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 90654 ']' 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 90654 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90654 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:52.795 killing process with pid 90654 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90654' 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 90654 00:19:52.795 [2024-05-14 23:07:04.116319] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 90654 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 90654 ']' 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 90654 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 90654 ']' 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 90654 00:19:52.795 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (90654) - No such process 
00:19:52.795 Process with pid 90654 is not found 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 90654 is not found' 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:52.795 00:19:52.795 real 0m16.931s 00:19:52.795 user 0m36.646s 00:19:52.795 sys 0m0.925s 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:52.795 23:07:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:52.795 ************************************ 00:19:52.795 END TEST spdkcli_nvmf_tcp 00:19:52.795 ************************************ 00:19:52.795 23:07:04 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:19:52.795 23:07:04 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:52.795 23:07:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:52.795 23:07:04 -- common/autotest_common.sh@10 -- # set +x 00:19:52.795 ************************************ 00:19:52.795 START TEST nvmf_identify_passthru 00:19:52.795 ************************************ 00:19:52.795 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:19:52.795 * Looking for test storage... 00:19:52.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:52.795 23:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 
00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.795 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:52.795 23:07:04 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.795 23:07:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.795 23:07:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.795 23:07:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.795 23:07:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.795 23:07:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.795 23:07:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:19:52.796 23:07:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.796 23:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:52.796 23:07:04 nvmf_identify_passthru -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.796 23:07:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.796 23:07:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.796 23:07:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.796 23:07:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.796 23:07:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.796 23:07:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:19:52.796 23:07:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.796 23:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.796 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:52.796 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:52.796 23:07:04 nvmf_identify_passthru -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:52.796 Cannot find device "nvmf_tgt_br" 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:52.796 Cannot find device "nvmf_tgt_br2" 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:52.796 Cannot find device "nvmf_tgt_br" 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:52.796 Cannot find device "nvmf_tgt_br2" 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:52.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:52.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.796 23:07:04 
nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:52.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:19:52.796 00:19:52.796 --- 10.0.0.2 ping statistics --- 00:19:52.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.796 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:52.796 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:52.796 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:19:52.796 00:19:52.796 --- 10.0.0.3 ping statistics --- 00:19:52.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.796 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:52.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:52.796 00:19:52.796 --- 10.0.0.1 ping statistics --- 00:19:52.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.796 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:52.796 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:52.797 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.797 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:52.797 23:07:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:52.797 23:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:52.797 23:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:52.797 23:07:04 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:19:52.797 23:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:19:52.797 23:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:19:52.797 23:07:04 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:19:52.797 23:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:19:52.797 23:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:19:52.797 23:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:19:52.797 23:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:19:52.797 23:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:19:52.797 23:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:19:53.055 23:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:19:53.055 23:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:19:53.055 23:07:05 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.055 23:07:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:53.055 23:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:19:53.055 23:07:05 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:53.055 23:07:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:53.055 23:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=91133 00:19:53.055 23:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:53.055 23:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:53.055 23:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 91133 00:19:53.055 23:07:05 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 91133 ']' 00:19:53.055 23:07:05 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.055 23:07:05 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:53.055 23:07:05 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.055 23:07:05 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:53.055 23:07:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:53.055 [2024-05-14 23:07:05.361280] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:19:53.055 [2024-05-14 23:07:05.361418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.313 [2024-05-14 23:07:05.501688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.313 [2024-05-14 23:07:05.575438] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:53.313 [2024-05-14 23:07:05.575519] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.313 [2024-05-14 23:07:05.575542] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.313 [2024-05-14 23:07:05.575559] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.313 [2024-05-14 23:07:05.575573] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.313 [2024-05-14 23:07:05.576283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.313 [2024-05-14 23:07:05.576385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.313 [2024-05-14 23:07:05.576467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.313 [2024-05-14 23:07:05.576474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:19:54.248 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.248 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:54.248 [2024-05-14 23:07:06.503629] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.248 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:54.248 [2024-05-14 23:07:06.512898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.248 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:54.248 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:54.248 Nvme0n1 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.248 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -m 1 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.248 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.248 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:54.506 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.506 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:54.506 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.507 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:54.507 [2024-05-14 23:07:06.649006] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:54.507 [2024-05-14 23:07:06.649592] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.507 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.507 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:19:54.507 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.507 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:54.507 [ 00:19:54.507 { 00:19:54.507 "allow_any_host": true, 00:19:54.507 "hosts": [], 00:19:54.507 "listen_addresses": [], 00:19:54.507 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:54.507 "subtype": "Discovery" 00:19:54.507 }, 00:19:54.507 { 00:19:54.507 "allow_any_host": true, 00:19:54.507 "hosts": [], 00:19:54.507 "listen_addresses": [ 00:19:54.507 { 00:19:54.507 "adrfam": "IPv4", 00:19:54.507 "traddr": "10.0.0.2", 00:19:54.507 "trsvcid": "4420", 00:19:54.507 "trtype": "TCP" 00:19:54.507 } 00:19:54.507 ], 00:19:54.507 "max_cntlid": 65519, 00:19:54.507 "max_namespaces": 1, 00:19:54.507 "min_cntlid": 1, 00:19:54.507 "model_number": "SPDK bdev Controller", 00:19:54.507 "namespaces": [ 00:19:54.507 { 00:19:54.507 "bdev_name": "Nvme0n1", 00:19:54.507 "name": "Nvme0n1", 00:19:54.507 "nguid": "AA1CAD529A174EE18709593022B67478", 00:19:54.507 "nsid": 1, 00:19:54.507 "uuid": "aa1cad52-9a17-4ee1-8709-593022b67478" 00:19:54.507 } 00:19:54.507 ], 00:19:54.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.507 "serial_number": "SPDK00000000000001", 00:19:54.507 "subtype": "NVMe" 00:19:54.507 } 00:19:54.507 ] 00:19:54.507 23:07:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.507 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:54.507 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:19:54.507 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:19:54.507 23:07:06 
nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:19:54.507 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:54.507 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:19:54.507 23:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:19:54.765 23:07:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:19:54.765 23:07:07 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:19:54.765 23:07:07 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:19:54.765 23:07:07 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:54.765 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.765 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:54.765 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.765 23:07:07 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:19:54.765 23:07:07 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:19:54.765 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:54.765 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:19:55.024 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:55.024 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:19:55.024 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:55.024 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:55.024 rmmod nvme_tcp 00:19:55.024 rmmod nvme_fabrics 00:19:55.024 rmmod nvme_keyring 00:19:55.024 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:55.024 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:19:55.024 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:19:55.024 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 91133 ']' 00:19:55.024 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 91133 00:19:55.024 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 91133 ']' 00:19:55.024 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 91133 00:19:55.024 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:19:55.024 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:55.024 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91133 00:19:55.024 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:55.024 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:55.024 killing process with pid 91133 00:19:55.024 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91133' 00:19:55.024 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 91133 00:19:55.024 [2024-05-14 23:07:07.258958] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:55.024 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 91133 00:19:55.283 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:55.283 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:55.283 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:55.283 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.283 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:55.283 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.283 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:55.283 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.283 23:07:07 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:55.283 ************************************ 00:19:55.283 END TEST nvmf_identify_passthru 00:19:55.283 ************************************ 00:19:55.283 00:19:55.283 real 0m3.117s 00:19:55.283 user 0m7.973s 00:19:55.283 sys 0m0.771s 00:19:55.283 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:55.283 23:07:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:19:55.283 23:07:07 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:55.283 23:07:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:55.283 23:07:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:55.283 23:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:55.283 ************************************ 00:19:55.283 START TEST nvmf_dif 00:19:55.283 ************************************ 00:19:55.283 23:07:07 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:55.283 * Looking for test storage... 
00:19:55.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:55.283 23:07:07 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.283 23:07:07 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:55.283 23:07:07 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.283 23:07:07 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.283 23:07:07 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.283 23:07:07 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.283 23:07:07 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.283 23:07:07 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.283 23:07:07 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:55.284 23:07:07 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:55.284 23:07:07 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:55.284 23:07:07 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:55.284 23:07:07 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:55.284 23:07:07 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:55.284 23:07:07 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.284 23:07:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:55.284 23:07:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:55.284 23:07:07 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:55.284 Cannot find device "nvmf_tgt_br" 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@155 -- # true 00:19:55.284 23:07:07 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:55.543 Cannot find device "nvmf_tgt_br2" 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@156 -- # true 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:55.543 Cannot find device "nvmf_tgt_br" 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@158 -- # true 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:55.543 Cannot find device "nvmf_tgt_br2" 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@159 -- # true 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:55.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:55.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:55.543 
23:07:07 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:55.543 23:07:07 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:55.803 23:07:07 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:55.803 23:07:07 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:55.803 23:07:07 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:55.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:19:55.803 00:19:55.803 --- 10.0.0.2 ping statistics --- 00:19:55.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.804 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:55.804 23:07:07 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:55.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:55.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:19:55.804 00:19:55.804 --- 10.0.0.3 ping statistics --- 00:19:55.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.804 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:55.804 23:07:07 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:55.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:55.804 00:19:55.804 --- 10.0.0.1 ping statistics --- 00:19:55.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.804 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:55.804 23:07:07 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.804 23:07:07 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:19:55.804 23:07:07 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:55.804 23:07:07 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:56.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:56.061 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:56.061 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:56.061 23:07:08 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.061 23:07:08 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:56.061 23:07:08 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:56.061 23:07:08 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.061 23:07:08 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:56.061 23:07:08 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:56.061 23:07:08 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:56.061 23:07:08 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:56.061 23:07:08 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:56.061 23:07:08 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:56.061 23:07:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:56.061 23:07:08 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=91474 00:19:56.061 
23:07:08 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 91474 00:19:56.061 23:07:08 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 91474 ']' 00:19:56.061 23:07:08 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:56.061 23:07:08 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.061 23:07:08 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:56.061 23:07:08 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.061 23:07:08 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:56.061 23:07:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:56.061 [2024-05-14 23:07:08.431269] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:19:56.061 [2024-05-14 23:07:08.431363] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.319 [2024-05-14 23:07:08.573107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.319 [2024-05-14 23:07:08.642180] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.319 [2024-05-14 23:07:08.642236] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.319 [2024-05-14 23:07:08.642249] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.319 [2024-05-14 23:07:08.642259] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.319 [2024-05-14 23:07:08.642268] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:56.319 [2024-05-14 23:07:08.642301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.256 23:07:09 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:57.256 23:07:09 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:19:57.256 23:07:09 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.256 23:07:09 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:57.256 23:07:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:57.256 23:07:09 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.256 23:07:09 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:57.256 23:07:09 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:57.256 23:07:09 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.256 23:07:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:57.256 [2024-05-14 23:07:09.536390] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.256 23:07:09 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.256 23:07:09 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:57.256 23:07:09 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:19:57.256 23:07:09 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:57.256 23:07:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:57.256 ************************************ 00:19:57.256 START TEST fio_dif_1_default 00:19:57.256 ************************************ 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:57.256 bdev_null0 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:57.256 [2024-05-14 23:07:09.580318] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:57.256 [2024-05-14 23:07:09.580527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.256 { 00:19:57.256 "params": { 00:19:57.256 "name": "Nvme$subsystem", 00:19:57.256 "trtype": "$TEST_TRANSPORT", 00:19:57.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.256 "adrfam": "ipv4", 00:19:57.256 "trsvcid": "$NVMF_PORT", 00:19:57.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.256 "hdgst": ${hdgst:-false}, 00:19:57.256 "ddgst": ${ddgst:-false} 00:19:57.256 }, 00:19:57.256 "method": "bdev_nvme_attach_controller" 00:19:57.256 } 00:19:57.256 EOF 00:19:57.256 )") 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:57.256 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:19:57.256 23:07:09 
nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:57.257 "params": { 00:19:57.257 "name": "Nvme0", 00:19:57.257 "trtype": "tcp", 00:19:57.257 "traddr": "10.0.0.2", 00:19:57.257 "adrfam": "ipv4", 00:19:57.257 "trsvcid": "4420", 00:19:57.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:57.257 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:57.257 "hdgst": false, 00:19:57.257 "ddgst": false 00:19:57.257 }, 00:19:57.257 "method": "bdev_nvme_attach_controller" 00:19:57.257 }' 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:19:57.257 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:19:57.515 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:19:57.515 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:19:57.515 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:57.515 23:07:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:57.515 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:57.515 fio-3.35 00:19:57.515 Starting 1 thread 00:20:09.720 00:20:09.720 filename0: (groupid=0, jobs=1): err= 0: pid=91564: Tue May 14 23:07:20 2024 00:20:09.720 read: IOPS=1982, BW=7929KiB/s (8119kB/s)(77.4MiB/10001msec) 00:20:09.720 slat (nsec): min=6153, max=54448, avg=9163.84, stdev=3454.51 00:20:09.720 clat (usec): min=425, max=41649, avg=1990.56, stdev=7629.62 00:20:09.720 lat (usec): min=432, max=41660, avg=1999.73, stdev=7629.77 00:20:09.720 clat percentiles (usec): 00:20:09.720 | 1.00th=[ 457], 5.00th=[ 461], 10.00th=[ 469], 20.00th=[ 474], 00:20:09.720 | 30.00th=[ 482], 40.00th=[ 486], 50.00th=[ 490], 60.00th=[ 498], 00:20:09.720 | 70.00th=[ 502], 80.00th=[ 515], 90.00th=[ 537], 95.00th=[ 644], 00:20:09.720 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:20:09.720 | 99.99th=[41681] 00:20:09.720 bw ( KiB/s): min= 4736, max=11776, per=96.12%, avg=7621.32, stdev=2334.74, samples=19 00:20:09.720 iops : min= 
1184, max= 2944, avg=1905.32, stdev=583.69, samples=19 00:20:09.720 lat (usec) : 500=66.91%, 750=29.24%, 1000=0.13% 00:20:09.720 lat (msec) : 10=0.02%, 50=3.69% 00:20:09.720 cpu : usr=90.19%, sys=8.73%, ctx=30, majf=0, minf=9 00:20:09.720 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.720 issued rwts: total=19824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.720 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:09.720 00:20:09.720 Run status group 0 (all jobs): 00:20:09.720 READ: bw=7929KiB/s (8119kB/s), 7929KiB/s-7929KiB/s (8119kB/s-8119kB/s), io=77.4MiB (81.2MB), run=10001-10001msec 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.720 00:20:09.720 real 0m10.928s 00:20:09.720 user 0m9.632s 00:20:09.720 sys 0m1.112s 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:09.720 ************************************ 00:20:09.720 END TEST fio_dif_1_default 00:20:09.720 ************************************ 00:20:09.720 23:07:20 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:09.720 23:07:20 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:09.720 23:07:20 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:09.720 23:07:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:09.720 ************************************ 00:20:09.720 START TEST fio_dif_1_multi_subsystems 00:20:09.720 ************************************ 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub 
in "$@" 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:09.720 bdev_null0 00:20:09.720 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:09.721 [2024-05-14 23:07:20.561626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:09.721 bdev_null1 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:09.721 23:07:20 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.721 { 00:20:09.721 "params": { 00:20:09.721 "name": "Nvme$subsystem", 00:20:09.721 "trtype": "$TEST_TRANSPORT", 00:20:09.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.721 "adrfam": "ipv4", 00:20:09.721 "trsvcid": "$NVMF_PORT", 00:20:09.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.721 "hdgst": ${hdgst:-false}, 00:20:09.721 "ddgst": ${ddgst:-false} 00:20:09.721 }, 00:20:09.721 "method": "bdev_nvme_attach_controller" 00:20:09.721 } 00:20:09.721 EOF 00:20:09.721 )") 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # 
local asan_lib= 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.721 { 00:20:09.721 "params": { 00:20:09.721 "name": "Nvme$subsystem", 00:20:09.721 "trtype": "$TEST_TRANSPORT", 00:20:09.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.721 "adrfam": "ipv4", 00:20:09.721 "trsvcid": "$NVMF_PORT", 00:20:09.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.721 "hdgst": ${hdgst:-false}, 00:20:09.721 "ddgst": ${ddgst:-false} 00:20:09.721 }, 00:20:09.721 "method": "bdev_nvme_attach_controller" 00:20:09.721 } 00:20:09.721 EOF 00:20:09.721 )") 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
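(Reference sketch, not part of the captured output: the fio_bdev/fio_plugin wrapper traced above amounts to preloading the SPDK fio engine and handing fio the merged bdev JSON plus the generated job file. The plugin and fio paths are taken from the log; writing the two descriptors to regular files instead of the /dev/fd redirections, and the Nvme0n1/Nvme1n1 bdev names, are assumptions.)

# bdev.json holds the two bdev_nvme_attach_controller entries printed just below;
# dif.fio is the job file produced by gen_fio_conf (one job per subsystem)
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio
# each "params" block becomes one attached NVMe-oF controller, so the job file
# can reference the resulting bdevs directly (e.g. filename=Nvme0n1)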
00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:09.721 "params": { 00:20:09.721 "name": "Nvme0", 00:20:09.721 "trtype": "tcp", 00:20:09.721 "traddr": "10.0.0.2", 00:20:09.721 "adrfam": "ipv4", 00:20:09.721 "trsvcid": "4420", 00:20:09.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:09.721 "hdgst": false, 00:20:09.721 "ddgst": false 00:20:09.721 }, 00:20:09.721 "method": "bdev_nvme_attach_controller" 00:20:09.721 },{ 00:20:09.721 "params": { 00:20:09.721 "name": "Nvme1", 00:20:09.721 "trtype": "tcp", 00:20:09.721 "traddr": "10.0.0.2", 00:20:09.721 "adrfam": "ipv4", 00:20:09.721 "trsvcid": "4420", 00:20:09.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.721 "hdgst": false, 00:20:09.721 "ddgst": false 00:20:09.721 }, 00:20:09.721 "method": "bdev_nvme_attach_controller" 00:20:09.721 }' 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:09.721 23:07:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:09.721 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:09.721 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:09.721 fio-3.35 00:20:09.721 Starting 2 threads 00:20:19.744 00:20:19.744 filename0: (groupid=0, jobs=1): err= 0: pid=91726: Tue May 14 23:07:31 2024 00:20:19.744 read: IOPS=222, BW=891KiB/s (913kB/s)(8944KiB/10035msec) 00:20:19.744 slat (nsec): min=5317, max=67113, avg=12631.67, stdev=10036.58 00:20:19.744 clat (usec): min=471, max=41848, avg=17907.52, stdev=20015.90 00:20:19.744 lat (usec): min=479, max=41910, avg=17920.16, stdev=20016.60 00:20:19.744 clat percentiles (usec): 00:20:19.744 | 1.00th=[ 494], 5.00th=[ 515], 10.00th=[ 529], 20.00th=[ 553], 00:20:19.744 | 30.00th=[ 594], 40.00th=[ 644], 50.00th=[ 693], 60.00th=[40633], 00:20:19.744 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:19.744 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:20:19.744 | 99.99th=[41681] 00:20:19.744 bw ( KiB/s): min= 512, max= 3200, per=49.95%, avg=892.80, stdev=580.00, samples=20 00:20:19.744 iops : 
min= 128, max= 800, avg=223.20, stdev=145.00, samples=20 00:20:19.744 lat (usec) : 500=2.33%, 750=50.89%, 1000=2.59% 00:20:19.744 lat (msec) : 2=1.43%, 50=42.75% 00:20:19.744 cpu : usr=94.98%, sys=4.41%, ctx=15, majf=0, minf=0 00:20:19.745 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.745 issued rwts: total=2236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.745 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:19.745 filename1: (groupid=0, jobs=1): err= 0: pid=91727: Tue May 14 23:07:31 2024 00:20:19.745 read: IOPS=223, BW=895KiB/s (916kB/s)(8976KiB/10033msec) 00:20:19.745 slat (nsec): min=7881, max=71861, avg=11812.37, stdev=7907.80 00:20:19.745 clat (usec): min=470, max=42674, avg=17844.32, stdev=20017.31 00:20:19.745 lat (usec): min=478, max=42696, avg=17856.13, stdev=20018.03 00:20:19.745 clat percentiles (usec): 00:20:19.745 | 1.00th=[ 482], 5.00th=[ 502], 10.00th=[ 515], 20.00th=[ 537], 00:20:19.745 | 30.00th=[ 586], 40.00th=[ 644], 50.00th=[ 676], 60.00th=[40633], 00:20:19.745 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:19.745 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:20:19.745 | 99.99th=[42730] 00:20:19.745 bw ( KiB/s): min= 544, max= 2592, per=50.18%, avg=896.00, stdev=458.70, samples=20 00:20:19.745 iops : min= 136, max= 648, avg=224.00, stdev=114.67, samples=20 00:20:19.745 lat (usec) : 500=4.90%, 750=49.87%, 1000=0.85% 00:20:19.745 lat (msec) : 2=1.78%, 50=42.60% 00:20:19.745 cpu : usr=94.66%, sys=4.72%, ctx=88, majf=0, minf=9 00:20:19.745 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.745 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.745 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:19.745 00:20:19.745 Run status group 0 (all jobs): 00:20:19.745 READ: bw=1786KiB/s (1829kB/s), 891KiB/s-895KiB/s (913kB/s-916kB/s), io=17.5MiB (18.3MB), run=10033-10035msec 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.745 23:07:31 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.745 00:20:19.745 real 0m11.141s 00:20:19.745 user 0m19.799s 00:20:19.745 sys 0m1.167s 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:19.745 23:07:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 ************************************ 00:20:19.745 END TEST fio_dif_1_multi_subsystems 00:20:19.745 ************************************ 00:20:19.745 23:07:31 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:19.745 23:07:31 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:19.745 23:07:31 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:19.745 23:07:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 ************************************ 00:20:19.745 START TEST fio_dif_rand_params 00:20:19.745 ************************************ 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 bdev_null0 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.745 [2024-05-14 23:07:31.747843] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.745 { 00:20:19.745 "params": { 00:20:19.745 "name": "Nvme$subsystem", 00:20:19.745 "trtype": "$TEST_TRANSPORT", 00:20:19.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.745 "adrfam": "ipv4", 00:20:19.745 
"trsvcid": "$NVMF_PORT", 00:20:19.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.745 "hdgst": ${hdgst:-false}, 00:20:19.745 "ddgst": ${ddgst:-false} 00:20:19.745 }, 00:20:19.745 "method": "bdev_nvme_attach_controller" 00:20:19.745 } 00:20:19.745 EOF 00:20:19.745 )") 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:19.745 23:07:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:19.745 "params": { 00:20:19.746 "name": "Nvme0", 00:20:19.746 "trtype": "tcp", 00:20:19.746 "traddr": "10.0.0.2", 00:20:19.746 "adrfam": "ipv4", 00:20:19.746 "trsvcid": "4420", 00:20:19.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:19.746 "hdgst": false, 00:20:19.746 "ddgst": false 00:20:19.746 }, 00:20:19.746 "method": "bdev_nvme_attach_controller" 00:20:19.746 }' 00:20:19.746 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:19.746 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:19.746 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.746 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.746 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:19.746 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:19.746 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:19.746 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:19.746 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:19.746 23:07:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.746 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:19.746 ... 
00:20:19.746 fio-3.35 00:20:19.746 Starting 3 threads 00:20:25.096 00:20:25.096 filename0: (groupid=0, jobs=1): err= 0: pid=91883: Tue May 14 23:07:37 2024 00:20:25.096 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(129MiB/5007msec) 00:20:25.096 slat (nsec): min=7926, max=47214, avg=15279.28, stdev=6932.00 00:20:25.096 clat (usec): min=8618, max=23688, avg=14561.87, stdev=2904.72 00:20:25.096 lat (usec): min=8626, max=23705, avg=14577.15, stdev=2905.34 00:20:25.096 clat percentiles (usec): 00:20:25.096 | 1.00th=[ 8848], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10945], 00:20:25.096 | 30.00th=[14091], 40.00th=[14877], 50.00th=[15270], 60.00th=[15664], 00:20:25.096 | 70.00th=[15926], 80.00th=[16450], 90.00th=[17433], 95.00th=[19006], 00:20:25.096 | 99.00th=[20841], 99.50th=[21890], 99.90th=[22414], 99.95th=[23725], 00:20:25.096 | 99.99th=[23725] 00:20:25.096 bw ( KiB/s): min=22016, max=30720, per=30.76%, avg=26291.20, stdev=2443.72, samples=10 00:20:25.096 iops : min= 172, max= 240, avg=205.40, stdev=19.09, samples=10 00:20:25.096 lat (msec) : 10=11.95%, 20=85.91%, 50=2.14% 00:20:25.096 cpu : usr=91.63%, sys=6.55%, ctx=6, majf=0, minf=0 00:20:25.096 IO depths : 1=16.5%, 2=83.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.096 issued rwts: total=1029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.096 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:25.096 filename0: (groupid=0, jobs=1): err= 0: pid=91884: Tue May 14 23:07:37 2024 00:20:25.096 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(135MiB/5008msec) 00:20:25.096 slat (nsec): min=4976, max=70424, avg=16895.59, stdev=7643.46 00:20:25.096 clat (usec): min=6196, max=55398, avg=13885.51, stdev=7077.40 00:20:25.096 lat (usec): min=6209, max=55406, avg=13902.40, stdev=7077.69 00:20:25.096 clat percentiles (usec): 00:20:25.096 | 1.00th=[ 6783], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[11469], 00:20:25.096 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[13435], 00:20:25.096 | 70.00th=[13960], 80.00th=[14615], 90.00th=[15664], 95.00th=[17171], 00:20:25.096 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313], 00:20:25.096 | 99.99th=[55313] 00:20:25.096 bw ( KiB/s): min=22528, max=35584, per=32.26%, avg=27571.20, stdev=4427.56, samples=10 00:20:25.096 iops : min= 176, max= 278, avg=215.40, stdev=34.59, samples=10 00:20:25.096 lat (msec) : 10=12.41%, 20=84.81%, 100=2.78% 00:20:25.096 cpu : usr=91.29%, sys=6.89%, ctx=21, majf=0, minf=0 00:20:25.096 IO depths : 1=4.4%, 2=95.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.096 issued rwts: total=1080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.096 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:25.096 filename0: (groupid=0, jobs=1): err= 0: pid=91885: Tue May 14 23:07:37 2024 00:20:25.096 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(154MiB/5007msec) 00:20:25.096 slat (nsec): min=4920, max=58070, avg=17494.83, stdev=7267.22 00:20:25.096 clat (usec): min=5713, max=54678, avg=12135.70, stdev=6256.03 00:20:25.096 lat (usec): min=5725, max=54701, avg=12153.19, stdev=6256.88 00:20:25.096 clat percentiles (usec): 00:20:25.096 | 1.00th=[ 6980], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[10159], 00:20:25.096 | 
30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:20:25.096 | 70.00th=[11994], 80.00th=[12518], 90.00th=[13960], 95.00th=[15139], 00:20:25.096 | 99.00th=[52167], 99.50th=[53216], 99.90th=[53216], 99.95th=[54789], 00:20:25.096 | 99.99th=[54789] 00:20:25.096 bw ( KiB/s): min=23808, max=36864, per=36.93%, avg=31564.80, stdev=4450.52, samples=10 00:20:25.096 iops : min= 186, max= 288, avg=246.60, stdev=34.77, samples=10 00:20:25.096 lat (msec) : 10=18.62%, 20=79.19%, 100=2.19% 00:20:25.096 cpu : usr=90.61%, sys=7.29%, ctx=5, majf=0, minf=0 00:20:25.096 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.096 issued rwts: total=1235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.096 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:25.096 00:20:25.096 Run status group 0 (all jobs): 00:20:25.096 READ: bw=83.5MiB/s (87.5MB/s), 25.7MiB/s-30.8MiB/s (26.9MB/s-32.3MB/s), io=418MiB (438MB), run=5007-5008msec 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.355 bdev_null0 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.355 [2024-05-14 23:07:37.669589] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:25.355 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.356 bdev_null1 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
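(Reference sketch: gen_fio_conf's job file is not echoed into the log, but one consistent with the parameters selected above for this pass (bs=4k, numjobs=8, iodepth=16, three dif-type-2 null-bdev subsystems) would look roughly like the heredoc below. The filename0/1/2 job names, the randread workload and the block size match the fio banner further down; the Nvme*n1 bdev names and the remaining options are assumptions.)

cat <<'EOF' > dif.fio
[global]
thread=1              # SPDK fio plugins are typically run with thread=1
ioengine=spdk_bdev
rw=randread
bs=4k
iodepth=16
numjobs=8             # 8 jobs x 3 files = the 24 threads started below

[filename0]
filename=Nvme0n1      # assumed bdev names exposed by the attached controllers

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF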
00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.356 bdev_null2 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.356 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.614 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local 
subsystem config 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.615 { 00:20:25.615 "params": { 00:20:25.615 "name": "Nvme$subsystem", 00:20:25.615 "trtype": "$TEST_TRANSPORT", 00:20:25.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.615 "adrfam": "ipv4", 00:20:25.615 "trsvcid": "$NVMF_PORT", 00:20:25.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.615 "hdgst": ${hdgst:-false}, 00:20:25.615 "ddgst": ${ddgst:-false} 00:20:25.615 }, 00:20:25.615 "method": "bdev_nvme_attach_controller" 00:20:25.615 } 00:20:25.615 EOF 00:20:25.615 )") 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.615 { 00:20:25.615 "params": { 00:20:25.615 "name": "Nvme$subsystem", 00:20:25.615 "trtype": "$TEST_TRANSPORT", 00:20:25.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.615 "adrfam": "ipv4", 00:20:25.615 "trsvcid": "$NVMF_PORT", 00:20:25.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.615 "hdgst": ${hdgst:-false}, 00:20:25.615 "ddgst": ${ddgst:-false} 00:20:25.615 }, 00:20:25.615 "method": "bdev_nvme_attach_controller" 00:20:25.615 } 00:20:25.615 EOF 00:20:25.615 )") 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.615 { 00:20:25.615 "params": { 00:20:25.615 "name": "Nvme$subsystem", 00:20:25.615 "trtype": "$TEST_TRANSPORT", 00:20:25.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.615 "adrfam": "ipv4", 00:20:25.615 "trsvcid": "$NVMF_PORT", 00:20:25.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.615 "hdgst": ${hdgst:-false}, 00:20:25.615 "ddgst": ${ddgst:-false} 00:20:25.615 }, 00:20:25.615 "method": "bdev_nvme_attach_controller" 00:20:25.615 } 00:20:25.615 EOF 00:20:25.615 )") 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:25.615 "params": { 00:20:25.615 "name": "Nvme0", 00:20:25.615 "trtype": "tcp", 00:20:25.615 "traddr": "10.0.0.2", 00:20:25.615 "adrfam": "ipv4", 00:20:25.615 "trsvcid": "4420", 00:20:25.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.615 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:25.615 "hdgst": false, 00:20:25.615 "ddgst": false 00:20:25.615 }, 00:20:25.615 "method": "bdev_nvme_attach_controller" 00:20:25.615 },{ 00:20:25.615 "params": { 00:20:25.615 "name": "Nvme1", 00:20:25.615 "trtype": "tcp", 00:20:25.615 "traddr": "10.0.0.2", 00:20:25.615 "adrfam": "ipv4", 00:20:25.615 "trsvcid": "4420", 00:20:25.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.615 "hdgst": false, 00:20:25.615 "ddgst": false 00:20:25.615 }, 00:20:25.615 "method": "bdev_nvme_attach_controller" 00:20:25.615 },{ 00:20:25.615 "params": { 00:20:25.615 "name": "Nvme2", 00:20:25.615 "trtype": "tcp", 00:20:25.615 "traddr": "10.0.0.2", 00:20:25.615 "adrfam": "ipv4", 00:20:25.615 "trsvcid": "4420", 00:20:25.615 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:25.615 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:25.615 "hdgst": false, 00:20:25.615 "ddgst": false 00:20:25.615 }, 00:20:25.615 "method": "bdev_nvme_attach_controller" 00:20:25.615 }' 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:25.615 23:07:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:25.615 23:07:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.615 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:25.615 ... 00:20:25.615 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:25.615 ... 00:20:25.615 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:25.615 ... 00:20:25.615 fio-3.35 00:20:25.615 Starting 24 threads 00:20:37.844 00:20:37.844 filename0: (groupid=0, jobs=1): err= 0: pid=91981: Tue May 14 23:07:48 2024 00:20:37.844 read: IOPS=135, BW=542KiB/s (555kB/s)(5440KiB/10045msec) 00:20:37.844 slat (usec): min=4, max=8054, avg=42.78, stdev=372.46 00:20:37.844 clat (msec): min=57, max=220, avg=117.76, stdev=31.00 00:20:37.844 lat (msec): min=57, max=220, avg=117.80, stdev=30.99 00:20:37.844 clat percentiles (msec): 00:20:37.844 | 1.00th=[ 67], 5.00th=[ 72], 10.00th=[ 88], 20.00th=[ 99], 00:20:37.844 | 30.00th=[ 102], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 117], 00:20:37.844 | 70.00th=[ 128], 80.00th=[ 144], 90.00th=[ 159], 95.00th=[ 180], 00:20:37.844 | 99.00th=[ 207], 99.50th=[ 222], 99.90th=[ 222], 99.95th=[ 222], 00:20:37.844 | 99.99th=[ 222] 00:20:37.844 bw ( KiB/s): min= 383, max= 640, per=3.67%, avg=537.55, stdev=99.21, samples=20 00:20:37.844 iops : min= 95, max= 160, avg=134.35, stdev=24.86, samples=20 00:20:37.844 lat (msec) : 100=25.37%, 250=74.63% 00:20:37.844 cpu : usr=41.57%, sys=1.91%, ctx=1277, majf=0, minf=9 00:20:37.844 IO depths : 1=3.3%, 2=7.5%, 4=18.8%, 8=61.0%, 16=9.4%, 32=0.0%, >=64=0.0% 00:20:37.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 issued rwts: total=1360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.844 filename0: (groupid=0, jobs=1): err= 0: pid=91982: Tue May 14 23:07:48 2024 00:20:37.844 read: IOPS=136, BW=545KiB/s (558kB/s)(5464KiB/10023msec) 00:20:37.844 slat (usec): min=9, max=8070, avg=53.40, stdev=434.21 00:20:37.844 clat (msec): min=47, max=216, avg=117.05, stdev=34.06 00:20:37.844 lat (msec): min=47, max=216, avg=117.11, stdev=34.06 00:20:37.844 clat percentiles (msec): 00:20:37.844 | 1.00th=[ 61], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 95], 00:20:37.844 | 30.00th=[ 97], 40.00th=[ 107], 50.00th=[ 108], 60.00th=[ 121], 00:20:37.844 | 70.00th=[ 132], 80.00th=[ 153], 90.00th=[ 167], 95.00th=[ 180], 00:20:37.844 | 99.00th=[ 203], 99.50th=[ 215], 99.90th=[ 218], 99.95th=[ 218], 00:20:37.844 | 99.99th=[ 218] 00:20:37.844 bw ( KiB/s): min= 432, max= 688, per=3.70%, avg=541.47, stdev=67.52, samples=19 00:20:37.844 iops : min= 108, max= 172, avg=135.37, stdev=16.88, samples=19 00:20:37.844 lat (msec) : 50=0.66%, 100=33.89%, 250=65.45% 00:20:37.844 cpu : usr=31.65%, sys=1.37%, ctx=865, majf=0, minf=9 00:20:37.844 IO depths : 1=1.8%, 2=4.1%, 4=12.2%, 8=70.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:20:37.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 complete : 0=0.0%, 4=90.8%, 8=4.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 issued rwts: 
total=1366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.844 filename0: (groupid=0, jobs=1): err= 0: pid=91983: Tue May 14 23:07:48 2024 00:20:37.844 read: IOPS=142, BW=571KiB/s (584kB/s)(5744KiB/10065msec) 00:20:37.844 slat (usec): min=4, max=8067, avg=50.65, stdev=423.28 00:20:37.844 clat (msec): min=33, max=213, avg=111.77, stdev=33.12 00:20:37.844 lat (msec): min=33, max=213, avg=111.82, stdev=33.12 00:20:37.844 clat percentiles (msec): 00:20:37.844 | 1.00th=[ 48], 5.00th=[ 64], 10.00th=[ 72], 20.00th=[ 84], 00:20:37.844 | 30.00th=[ 96], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 111], 00:20:37.844 | 70.00th=[ 127], 80.00th=[ 144], 90.00th=[ 157], 95.00th=[ 167], 00:20:37.844 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 213], 99.95th=[ 213], 00:20:37.844 | 99.99th=[ 213] 00:20:37.844 bw ( KiB/s): min= 384, max= 792, per=3.88%, avg=567.90, stdev=119.78, samples=20 00:20:37.844 iops : min= 96, max= 198, avg=141.90, stdev=29.97, samples=20 00:20:37.844 lat (msec) : 50=1.46%, 100=36.56%, 250=61.98% 00:20:37.844 cpu : usr=32.37%, sys=1.11%, ctx=903, majf=0, minf=9 00:20:37.844 IO depths : 1=1.7%, 2=4.0%, 4=13.8%, 8=69.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:20:37.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 complete : 0=0.0%, 4=90.4%, 8=4.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 issued rwts: total=1436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.844 filename0: (groupid=0, jobs=1): err= 0: pid=91984: Tue May 14 23:07:48 2024 00:20:37.844 read: IOPS=143, BW=572KiB/s (586kB/s)(5760KiB/10069msec) 00:20:37.844 slat (usec): min=5, max=4044, avg=25.75, stdev=127.21 00:20:37.844 clat (msec): min=60, max=209, avg=111.64, stdev=25.51 00:20:37.844 lat (msec): min=60, max=209, avg=111.67, stdev=25.51 00:20:37.844 clat percentiles (msec): 00:20:37.844 | 1.00th=[ 68], 5.00th=[ 72], 10.00th=[ 84], 20.00th=[ 95], 00:20:37.844 | 30.00th=[ 99], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 110], 00:20:37.844 | 70.00th=[ 120], 80.00th=[ 134], 90.00th=[ 153], 95.00th=[ 157], 00:20:37.844 | 99.00th=[ 178], 99.50th=[ 194], 99.90th=[ 209], 99.95th=[ 209], 00:20:37.844 | 99.99th=[ 209] 00:20:37.844 bw ( KiB/s): min= 352, max= 640, per=3.89%, avg=569.55, stdev=82.14, samples=20 00:20:37.844 iops : min= 88, max= 160, avg=142.35, stdev=20.56, samples=20 00:20:37.844 lat (msec) : 100=34.65%, 250=65.35% 00:20:37.844 cpu : usr=39.62%, sys=1.44%, ctx=1150, majf=0, minf=9 00:20:37.844 IO depths : 1=3.7%, 2=8.3%, 4=19.9%, 8=59.2%, 16=9.0%, 32=0.0%, >=64=0.0% 00:20:37.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 complete : 0=0.0%, 4=92.7%, 8=1.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 issued rwts: total=1440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.844 filename0: (groupid=0, jobs=1): err= 0: pid=91985: Tue May 14 23:07:48 2024 00:20:37.844 read: IOPS=144, BW=577KiB/s (591kB/s)(5808KiB/10062msec) 00:20:37.844 slat (usec): min=4, max=8059, avg=34.13, stdev=234.56 00:20:37.844 clat (msec): min=46, max=206, avg=110.47, stdev=27.58 00:20:37.844 lat (msec): min=46, max=206, avg=110.50, stdev=27.58 00:20:37.844 clat percentiles (msec): 00:20:37.844 | 1.00th=[ 48], 5.00th=[ 65], 10.00th=[ 72], 20.00th=[ 86], 00:20:37.844 | 30.00th=[ 96], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 120], 00:20:37.844 | 70.00th=[ 121], 
80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 153], 00:20:37.844 | 99.00th=[ 182], 99.50th=[ 184], 99.90th=[ 207], 99.95th=[ 207], 00:20:37.844 | 99.99th=[ 207] 00:20:37.844 bw ( KiB/s): min= 383, max= 769, per=3.92%, avg=574.45, stdev=100.14, samples=20 00:20:37.844 iops : min= 95, max= 192, avg=143.55, stdev=25.09, samples=20 00:20:37.844 lat (msec) : 50=1.03%, 100=36.78%, 250=62.19% 00:20:37.844 cpu : usr=35.75%, sys=1.31%, ctx=982, majf=0, minf=9 00:20:37.844 IO depths : 1=3.3%, 2=7.1%, 4=17.4%, 8=62.7%, 16=9.4%, 32=0.0%, >=64=0.0% 00:20:37.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 issued rwts: total=1452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.844 filename0: (groupid=0, jobs=1): err= 0: pid=91986: Tue May 14 23:07:48 2024 00:20:37.844 read: IOPS=193, BW=774KiB/s (793kB/s)(7800KiB/10077msec) 00:20:37.844 slat (usec): min=4, max=4046, avg=17.25, stdev=91.90 00:20:37.844 clat (msec): min=3, max=183, avg=82.39, stdev=28.93 00:20:37.844 lat (msec): min=3, max=183, avg=82.41, stdev=28.93 00:20:37.844 clat percentiles (msec): 00:20:37.844 | 1.00th=[ 7], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 64], 00:20:37.844 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 84], 00:20:37.844 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 121], 95.00th=[ 133], 00:20:37.844 | 99.00th=[ 171], 99.50th=[ 171], 99.90th=[ 184], 99.95th=[ 184], 00:20:37.844 | 99.99th=[ 184] 00:20:37.844 bw ( KiB/s): min= 480, max= 1360, per=5.28%, avg=773.70, stdev=188.64, samples=20 00:20:37.844 iops : min= 120, max= 340, avg=193.35, stdev=47.16, samples=20 00:20:37.844 lat (msec) : 4=0.82%, 10=2.21%, 20=0.26%, 50=2.26%, 100=72.62% 00:20:37.844 lat (msec) : 250=21.85% 00:20:37.844 cpu : usr=42.21%, sys=1.74%, ctx=1314, majf=0, minf=9 00:20:37.844 IO depths : 1=0.5%, 2=1.1%, 4=6.9%, 8=78.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:20:37.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 complete : 0=0.0%, 4=89.2%, 8=6.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 issued rwts: total=1950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.844 filename0: (groupid=0, jobs=1): err= 0: pid=91987: Tue May 14 23:07:48 2024 00:20:37.844 read: IOPS=139, BW=557KiB/s (570kB/s)(5592KiB/10046msec) 00:20:37.844 slat (usec): min=3, max=3036, avg=28.19, stdev=89.86 00:20:37.844 clat (msec): min=45, max=243, avg=114.77, stdev=33.60 00:20:37.844 lat (msec): min=45, max=243, avg=114.80, stdev=33.59 00:20:37.844 clat percentiles (msec): 00:20:37.844 | 1.00th=[ 57], 5.00th=[ 68], 10.00th=[ 72], 20.00th=[ 92], 00:20:37.844 | 30.00th=[ 96], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 114], 00:20:37.844 | 70.00th=[ 126], 80.00th=[ 144], 90.00th=[ 161], 95.00th=[ 174], 00:20:37.844 | 99.00th=[ 211], 99.50th=[ 226], 99.90th=[ 245], 99.95th=[ 245], 00:20:37.844 | 99.99th=[ 245] 00:20:37.844 bw ( KiB/s): min= 344, max= 896, per=3.77%, avg=552.75, stdev=114.60, samples=20 00:20:37.844 iops : min= 86, max= 224, avg=138.15, stdev=28.71, samples=20 00:20:37.844 lat (msec) : 50=0.64%, 100=35.98%, 250=63.38% 00:20:37.844 cpu : usr=35.20%, sys=1.25%, ctx=981, majf=0, minf=9 00:20:37.844 IO depths : 1=2.2%, 2=4.7%, 4=13.3%, 8=68.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:20:37.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 
complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.844 issued rwts: total=1398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.844 filename0: (groupid=0, jobs=1): err= 0: pid=91988: Tue May 14 23:07:48 2024 00:20:37.844 read: IOPS=140, BW=560KiB/s (574kB/s)(5616KiB/10022msec) 00:20:37.845 slat (usec): min=10, max=8056, avg=63.51, stdev=492.44 00:20:37.845 clat (msec): min=48, max=230, avg=113.75, stdev=32.35 00:20:37.845 lat (msec): min=48, max=230, avg=113.81, stdev=32.36 00:20:37.845 clat percentiles (msec): 00:20:37.845 | 1.00th=[ 60], 5.00th=[ 65], 10.00th=[ 72], 20.00th=[ 85], 00:20:37.845 | 30.00th=[ 96], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 120], 00:20:37.845 | 70.00th=[ 132], 80.00th=[ 144], 90.00th=[ 157], 95.00th=[ 167], 00:20:37.845 | 99.00th=[ 203], 99.50th=[ 215], 99.90th=[ 230], 99.95th=[ 230], 00:20:37.845 | 99.99th=[ 230] 00:20:37.845 bw ( KiB/s): min= 312, max= 896, per=3.79%, avg=555.20, stdev=125.75, samples=20 00:20:37.845 iops : min= 78, max= 224, avg=138.80, stdev=31.44, samples=20 00:20:37.845 lat (msec) : 50=0.36%, 100=37.96%, 250=61.68% 00:20:37.845 cpu : usr=34.13%, sys=1.30%, ctx=885, majf=0, minf=9 00:20:37.845 IO depths : 1=2.6%, 2=6.1%, 4=16.7%, 8=64.6%, 16=10.0%, 32=0.0%, >=64=0.0% 00:20:37.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 issued rwts: total=1404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.845 filename1: (groupid=0, jobs=1): err= 0: pid=91989: Tue May 14 23:07:48 2024 00:20:37.845 read: IOPS=130, BW=523KiB/s (536kB/s)(5248KiB/10027msec) 00:20:37.845 slat (usec): min=4, max=8048, avg=46.56, stdev=367.61 00:20:37.845 clat (msec): min=61, max=239, avg=121.97, stdev=29.16 00:20:37.845 lat (msec): min=61, max=239, avg=122.01, stdev=29.17 00:20:37.845 clat percentiles (msec): 00:20:37.845 | 1.00th=[ 71], 5.00th=[ 81], 10.00th=[ 94], 20.00th=[ 100], 00:20:37.845 | 30.00th=[ 102], 40.00th=[ 107], 50.00th=[ 112], 60.00th=[ 122], 00:20:37.845 | 70.00th=[ 142], 80.00th=[ 150], 90.00th=[ 157], 95.00th=[ 169], 00:20:37.845 | 99.00th=[ 207], 99.50th=[ 228], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.845 | 99.99th=[ 241] 00:20:37.845 bw ( KiB/s): min= 383, max= 640, per=3.54%, avg=518.35, stdev=97.24, samples=20 00:20:37.845 iops : min= 95, max= 160, avg=129.55, stdev=24.37, samples=20 00:20:37.845 lat (msec) : 100=24.39%, 250=75.61% 00:20:37.845 cpu : usr=38.41%, sys=1.63%, ctx=1169, majf=0, minf=9 00:20:37.845 IO depths : 1=3.8%, 2=8.6%, 4=21.0%, 8=57.9%, 16=8.7%, 32=0.0%, >=64=0.0% 00:20:37.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 complete : 0=0.0%, 4=92.9%, 8=1.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.845 filename1: (groupid=0, jobs=1): err= 0: pid=91990: Tue May 14 23:07:48 2024 00:20:37.845 read: IOPS=157, BW=629KiB/s (644kB/s)(6336KiB/10067msec) 00:20:37.845 slat (usec): min=8, max=4041, avg=21.12, stdev=143.22 00:20:37.845 clat (msec): min=31, max=259, avg=101.31, stdev=33.70 00:20:37.845 lat (msec): min=31, max=259, avg=101.33, stdev=33.71 00:20:37.845 clat percentiles (msec): 00:20:37.845 | 1.00th=[ 33], 5.00th=[ 54], 10.00th=[ 66], 20.00th=[ 72], 
00:20:37.845 | 30.00th=[ 81], 40.00th=[ 94], 50.00th=[ 102], 60.00th=[ 105], 00:20:37.845 | 70.00th=[ 111], 80.00th=[ 123], 90.00th=[ 148], 95.00th=[ 167], 00:20:37.845 | 99.00th=[ 209], 99.50th=[ 209], 99.90th=[ 259], 99.95th=[ 259], 00:20:37.845 | 99.99th=[ 259] 00:20:37.845 bw ( KiB/s): min= 303, max= 896, per=4.29%, avg=628.55, stdev=141.15, samples=20 00:20:37.845 iops : min= 75, max= 224, avg=157.10, stdev=35.38, samples=20 00:20:37.845 lat (msec) : 50=4.80%, 100=42.74%, 250=52.15%, 500=0.32% 00:20:37.845 cpu : usr=40.72%, sys=1.53%, ctx=1289, majf=0, minf=9 00:20:37.845 IO depths : 1=2.6%, 2=5.6%, 4=14.6%, 8=66.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:20:37.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.845 filename1: (groupid=0, jobs=1): err= 0: pid=91991: Tue May 14 23:07:48 2024 00:20:37.845 read: IOPS=155, BW=623KiB/s (638kB/s)(6264KiB/10058msec) 00:20:37.845 slat (usec): min=7, max=8053, avg=41.61, stdev=337.07 00:20:37.845 clat (msec): min=35, max=191, avg=102.50, stdev=32.54 00:20:37.845 lat (msec): min=35, max=191, avg=102.55, stdev=32.55 00:20:37.845 clat percentiles (msec): 00:20:37.845 | 1.00th=[ 36], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 72], 00:20:37.845 | 30.00th=[ 84], 40.00th=[ 95], 50.00th=[ 101], 60.00th=[ 108], 00:20:37.845 | 70.00th=[ 118], 80.00th=[ 132], 90.00th=[ 150], 95.00th=[ 157], 00:20:37.845 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:20:37.845 | 99.99th=[ 192] 00:20:37.845 bw ( KiB/s): min= 383, max= 1024, per=4.23%, avg=619.75, stdev=150.74, samples=20 00:20:37.845 iops : min= 95, max= 256, avg=154.90, stdev=37.75, samples=20 00:20:37.845 lat (msec) : 50=4.85%, 100=45.21%, 250=49.94% 00:20:37.845 cpu : usr=37.26%, sys=1.48%, ctx=1318, majf=0, minf=9 00:20:37.845 IO depths : 1=1.8%, 2=3.6%, 4=11.5%, 8=71.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:20:37.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 complete : 0=0.0%, 4=90.0%, 8=5.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 issued rwts: total=1566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.845 filename1: (groupid=0, jobs=1): err= 0: pid=91992: Tue May 14 23:07:48 2024 00:20:37.845 read: IOPS=135, BW=542KiB/s (555kB/s)(5440KiB/10040msec) 00:20:37.845 slat (usec): min=3, max=8068, avg=57.59, stdev=486.13 00:20:37.845 clat (msec): min=59, max=240, avg=117.73, stdev=35.09 00:20:37.845 lat (msec): min=59, max=240, avg=117.79, stdev=35.08 00:20:37.845 clat percentiles (msec): 00:20:37.845 | 1.00th=[ 60], 5.00th=[ 64], 10.00th=[ 72], 20.00th=[ 94], 00:20:37.845 | 30.00th=[ 96], 40.00th=[ 108], 50.00th=[ 109], 60.00th=[ 121], 00:20:37.845 | 70.00th=[ 140], 80.00th=[ 146], 90.00th=[ 161], 95.00th=[ 180], 00:20:37.845 | 99.00th=[ 207], 99.50th=[ 215], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.845 | 99.99th=[ 241] 00:20:37.845 bw ( KiB/s): min= 344, max= 720, per=3.67%, avg=537.60, stdev=113.01, samples=20 00:20:37.845 iops : min= 86, max= 180, avg=134.40, stdev=28.25, samples=20 00:20:37.845 lat (msec) : 100=34.85%, 250=65.15% 00:20:37.845 cpu : usr=32.51%, sys=1.20%, ctx=882, majf=0, minf=9 00:20:37.845 IO depths : 1=2.9%, 2=6.5%, 4=17.2%, 8=63.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:20:37.845 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 issued rwts: total=1360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.845 filename1: (groupid=0, jobs=1): err= 0: pid=91993: Tue May 14 23:07:48 2024 00:20:37.845 read: IOPS=158, BW=634KiB/s (649kB/s)(6384KiB/10066msec) 00:20:37.845 slat (usec): min=9, max=8044, avg=33.87, stdev=224.82 00:20:37.845 clat (msec): min=39, max=269, avg=100.69, stdev=34.57 00:20:37.845 lat (msec): min=39, max=269, avg=100.72, stdev=34.57 00:20:37.845 clat percentiles (msec): 00:20:37.845 | 1.00th=[ 41], 5.00th=[ 56], 10.00th=[ 63], 20.00th=[ 71], 00:20:37.845 | 30.00th=[ 75], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 108], 00:20:37.845 | 70.00th=[ 112], 80.00th=[ 129], 90.00th=[ 148], 95.00th=[ 163], 00:20:37.845 | 99.00th=[ 194], 99.50th=[ 213], 99.90th=[ 271], 99.95th=[ 271], 00:20:37.845 | 99.99th=[ 271] 00:20:37.845 bw ( KiB/s): min= 376, max= 912, per=4.31%, avg=631.80, stdev=160.64, samples=20 00:20:37.845 iops : min= 94, max= 228, avg=157.90, stdev=40.21, samples=20 00:20:37.845 lat (msec) : 50=2.07%, 100=50.25%, 250=47.37%, 500=0.31% 00:20:37.845 cpu : usr=38.05%, sys=1.35%, ctx=1059, majf=0, minf=9 00:20:37.845 IO depths : 1=1.4%, 2=2.9%, 4=10.4%, 8=73.7%, 16=11.6%, 32=0.0%, >=64=0.0% 00:20:37.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 complete : 0=0.0%, 4=89.9%, 8=5.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 issued rwts: total=1596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.845 filename1: (groupid=0, jobs=1): err= 0: pid=91994: Tue May 14 23:07:48 2024 00:20:37.845 read: IOPS=134, BW=538KiB/s (551kB/s)(5400KiB/10038msec) 00:20:37.845 slat (usec): min=4, max=8058, avg=64.71, stdev=517.70 00:20:37.845 clat (msec): min=60, max=241, avg=118.48, stdev=31.81 00:20:37.845 lat (msec): min=60, max=241, avg=118.54, stdev=31.79 00:20:37.845 clat percentiles (msec): 00:20:37.845 | 1.00th=[ 64], 5.00th=[ 73], 10.00th=[ 86], 20.00th=[ 96], 00:20:37.845 | 30.00th=[ 101], 40.00th=[ 108], 50.00th=[ 109], 60.00th=[ 121], 00:20:37.845 | 70.00th=[ 124], 80.00th=[ 144], 90.00th=[ 157], 95.00th=[ 180], 00:20:37.845 | 99.00th=[ 228], 99.50th=[ 230], 99.90th=[ 241], 99.95th=[ 241], 00:20:37.845 | 99.99th=[ 241] 00:20:37.845 bw ( KiB/s): min= 383, max= 688, per=3.64%, avg=533.00, stdev=90.17, samples=20 00:20:37.845 iops : min= 95, max= 172, avg=133.15, stdev=22.65, samples=20 00:20:37.845 lat (msec) : 100=30.22%, 250=69.78% 00:20:37.845 cpu : usr=31.65%, sys=1.38%, ctx=875, majf=0, minf=9 00:20:37.845 IO depths : 1=3.6%, 2=7.6%, 4=18.4%, 8=61.4%, 16=9.0%, 32=0.0%, >=64=0.0% 00:20:37.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.845 filename1: (groupid=0, jobs=1): err= 0: pid=91995: Tue May 14 23:07:48 2024 00:20:37.845 read: IOPS=131, BW=526KiB/s (539kB/s)(5272KiB/10019msec) 00:20:37.845 slat (usec): min=9, max=8060, avg=71.35, stdev=573.29 00:20:37.845 clat (msec): min=51, max=263, avg=121.11, stdev=35.14 00:20:37.845 lat (msec): min=51, max=263, avg=121.18, stdev=35.15 00:20:37.845 clat percentiles (msec): 
00:20:37.845 | 1.00th=[ 61], 5.00th=[ 72], 10.00th=[ 83], 20.00th=[ 96], 00:20:37.845 | 30.00th=[ 100], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 121], 00:20:37.845 | 70.00th=[ 132], 80.00th=[ 146], 90.00th=[ 169], 95.00th=[ 190], 00:20:37.845 | 99.00th=[ 218], 99.50th=[ 251], 99.90th=[ 264], 99.95th=[ 264], 00:20:37.845 | 99.99th=[ 264] 00:20:37.845 bw ( KiB/s): min= 272, max= 680, per=3.55%, avg=520.80, stdev=104.66, samples=20 00:20:37.845 iops : min= 68, max= 170, avg=130.20, stdev=26.16, samples=20 00:20:37.845 lat (msec) : 100=33.00%, 250=66.62%, 500=0.38% 00:20:37.845 cpu : usr=31.63%, sys=1.19%, ctx=854, majf=0, minf=9 00:20:37.845 IO depths : 1=3.3%, 2=7.1%, 4=17.4%, 8=62.9%, 16=9.3%, 32=0.0%, >=64=0.0% 00:20:37.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 complete : 0=0.0%, 4=92.0%, 8=2.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.845 issued rwts: total=1318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.846 filename1: (groupid=0, jobs=1): err= 0: pid=91996: Tue May 14 23:07:48 2024 00:20:37.846 read: IOPS=133, BW=535KiB/s (547kB/s)(5360KiB/10027msec) 00:20:37.846 slat (usec): min=4, max=8049, avg=42.83, stdev=328.65 00:20:37.846 clat (msec): min=37, max=210, avg=119.38, stdev=29.25 00:20:37.846 lat (msec): min=37, max=210, avg=119.42, stdev=29.25 00:20:37.846 clat percentiles (msec): 00:20:37.846 | 1.00th=[ 61], 5.00th=[ 72], 10.00th=[ 87], 20.00th=[ 96], 00:20:37.846 | 30.00th=[ 106], 40.00th=[ 108], 50.00th=[ 112], 60.00th=[ 122], 00:20:37.846 | 70.00th=[ 133], 80.00th=[ 146], 90.00th=[ 157], 95.00th=[ 163], 00:20:37.846 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 211], 99.95th=[ 211], 00:20:37.846 | 99.99th=[ 211] 00:20:37.846 bw ( KiB/s): min= 383, max= 656, per=3.61%, avg=528.95, stdev=75.47, samples=20 00:20:37.846 iops : min= 95, max= 164, avg=132.10, stdev=18.98, samples=20 00:20:37.846 lat (msec) : 50=0.90%, 100=23.28%, 250=75.82% 00:20:37.846 cpu : usr=34.94%, sys=1.40%, ctx=951, majf=0, minf=9 00:20:37.846 IO depths : 1=3.8%, 2=8.4%, 4=19.9%, 8=59.2%, 16=8.7%, 32=0.0%, >=64=0.0% 00:20:37.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 complete : 0=0.0%, 4=92.7%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 issued rwts: total=1340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.846 filename2: (groupid=0, jobs=1): err= 0: pid=91997: Tue May 14 23:07:48 2024 00:20:37.846 read: IOPS=180, BW=722KiB/s (739kB/s)(7280KiB/10081msec) 00:20:37.846 slat (usec): min=4, max=5110, avg=31.79, stdev=214.34 00:20:37.846 clat (msec): min=8, max=181, avg=88.36, stdev=28.07 00:20:37.846 lat (msec): min=8, max=181, avg=88.39, stdev=28.06 00:20:37.846 clat percentiles (msec): 00:20:37.846 | 1.00th=[ 12], 5.00th=[ 53], 10.00th=[ 58], 20.00th=[ 67], 00:20:37.846 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 92], 00:20:37.846 | 70.00th=[ 100], 80.00th=[ 111], 90.00th=[ 126], 95.00th=[ 144], 00:20:37.846 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 182], 00:20:37.846 | 99.99th=[ 182] 00:20:37.846 bw ( KiB/s): min= 460, max= 1017, per=4.92%, avg=720.40, stdev=139.31, samples=20 00:20:37.846 iops : min= 115, max= 254, avg=180.05, stdev=34.80, samples=20 00:20:37.846 lat (msec) : 10=0.88%, 20=0.88%, 50=2.36%, 100=67.31%, 250=28.57% 00:20:37.846 cpu : usr=40.50%, sys=1.59%, ctx=1321, majf=0, minf=9 00:20:37.846 IO depths : 1=1.3%, 
2=2.5%, 4=9.3%, 8=74.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:20:37.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.846 filename2: (groupid=0, jobs=1): err= 0: pid=91998: Tue May 14 23:07:48 2024 00:20:37.846 read: IOPS=182, BW=729KiB/s (746kB/s)(7344KiB/10076msec) 00:20:37.846 slat (usec): min=3, max=8050, avg=28.24, stdev=289.29 00:20:37.846 clat (msec): min=3, max=195, avg=87.44, stdev=31.14 00:20:37.846 lat (msec): min=3, max=195, avg=87.47, stdev=31.14 00:20:37.846 clat percentiles (msec): 00:20:37.846 | 1.00th=[ 9], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 67], 00:20:37.846 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 92], 00:20:37.846 | 70.00th=[ 101], 80.00th=[ 112], 90.00th=[ 126], 95.00th=[ 148], 00:20:37.846 | 99.00th=[ 174], 99.50th=[ 197], 99.90th=[ 197], 99.95th=[ 197], 00:20:37.846 | 99.99th=[ 197] 00:20:37.846 bw ( KiB/s): min= 360, max= 1232, per=4.98%, avg=728.25, stdev=178.51, samples=20 00:20:37.846 iops : min= 90, max= 308, avg=182.00, stdev=44.62, samples=20 00:20:37.846 lat (msec) : 4=0.87%, 10=0.87%, 20=0.87%, 50=3.38%, 100=64.16% 00:20:37.846 lat (msec) : 250=29.85% 00:20:37.846 cpu : usr=41.75%, sys=1.57%, ctx=1438, majf=0, minf=9 00:20:37.846 IO depths : 1=0.4%, 2=1.0%, 4=7.0%, 8=78.2%, 16=13.4%, 32=0.0%, >=64=0.0% 00:20:37.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 issued rwts: total=1836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.846 filename2: (groupid=0, jobs=1): err= 0: pid=91999: Tue May 14 23:07:48 2024 00:20:37.846 read: IOPS=164, BW=659KiB/s (675kB/s)(6632KiB/10068msec) 00:20:37.846 slat (usec): min=7, max=1246, avg=18.25, stdev=32.01 00:20:37.846 clat (msec): min=38, max=250, avg=96.85, stdev=31.04 00:20:37.846 lat (msec): min=38, max=250, avg=96.87, stdev=31.04 00:20:37.846 clat percentiles (msec): 00:20:37.846 | 1.00th=[ 47], 5.00th=[ 58], 10.00th=[ 63], 20.00th=[ 70], 00:20:37.846 | 30.00th=[ 80], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 100], 00:20:37.846 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 144], 95.00th=[ 157], 00:20:37.846 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 251], 99.95th=[ 251], 00:20:37.846 | 99.99th=[ 251] 00:20:37.846 bw ( KiB/s): min= 383, max= 944, per=4.48%, avg=656.50, stdev=149.48, samples=20 00:20:37.846 iops : min= 95, max= 236, avg=164.05, stdev=37.44, samples=20 00:20:37.846 lat (msec) : 50=2.23%, 100=58.14%, 250=39.32%, 500=0.30% 00:20:37.846 cpu : usr=37.24%, sys=1.57%, ctx=1131, majf=0, minf=9 00:20:37.846 IO depths : 1=1.4%, 2=3.1%, 4=10.9%, 8=72.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:20:37.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 complete : 0=0.0%, 4=90.3%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 issued rwts: total=1658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.846 filename2: (groupid=0, jobs=1): err= 0: pid=92000: Tue May 14 23:07:48 2024 00:20:37.846 read: IOPS=153, BW=614KiB/s (629kB/s)(6180KiB/10068msec) 00:20:37.846 slat (usec): min=4, max=8048, avg=42.95, stdev=339.05 00:20:37.846 clat (msec): min=33, 
max=203, avg=103.81, stdev=29.15 00:20:37.846 lat (msec): min=33, max=203, avg=103.85, stdev=29.15 00:20:37.846 clat percentiles (msec): 00:20:37.846 | 1.00th=[ 48], 5.00th=[ 64], 10.00th=[ 70], 20.00th=[ 81], 00:20:37.846 | 30.00th=[ 90], 40.00th=[ 96], 50.00th=[ 102], 60.00th=[ 107], 00:20:37.846 | 70.00th=[ 111], 80.00th=[ 125], 90.00th=[ 144], 95.00th=[ 155], 00:20:37.846 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:20:37.846 | 99.99th=[ 205] 00:20:37.846 bw ( KiB/s): min= 384, max= 824, per=4.18%, avg=611.25, stdev=103.89, samples=20 00:20:37.846 iops : min= 96, max= 206, avg=152.75, stdev=25.97, samples=20 00:20:37.846 lat (msec) : 50=3.17%, 100=44.79%, 250=52.04% 00:20:37.846 cpu : usr=32.37%, sys=1.15%, ctx=915, majf=0, minf=9 00:20:37.846 IO depths : 1=1.9%, 2=4.2%, 4=12.4%, 8=70.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:20:37.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 issued rwts: total=1545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.846 filename2: (groupid=0, jobs=1): err= 0: pid=92001: Tue May 14 23:07:48 2024 00:20:37.846 read: IOPS=166, BW=666KiB/s (682kB/s)(6704KiB/10068msec) 00:20:37.846 slat (usec): min=8, max=5052, avg=32.44, stdev=214.69 00:20:37.846 clat (msec): min=47, max=209, avg=95.70, stdev=28.45 00:20:37.846 lat (msec): min=47, max=209, avg=95.73, stdev=28.46 00:20:37.846 clat percentiles (msec): 00:20:37.846 | 1.00th=[ 52], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 70], 00:20:37.846 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 94], 60.00th=[ 101], 00:20:37.846 | 70.00th=[ 106], 80.00th=[ 115], 90.00th=[ 138], 95.00th=[ 150], 00:20:37.846 | 99.00th=[ 171], 99.50th=[ 197], 99.90th=[ 209], 99.95th=[ 209], 00:20:37.846 | 99.99th=[ 209] 00:20:37.846 bw ( KiB/s): min= 463, max= 928, per=4.53%, avg=663.75, stdev=120.40, samples=20 00:20:37.846 iops : min= 115, max= 232, avg=165.90, stdev=30.17, samples=20 00:20:37.846 lat (msec) : 50=0.36%, 100=60.86%, 250=38.78% 00:20:37.846 cpu : usr=40.92%, sys=1.62%, ctx=1465, majf=0, minf=9 00:20:37.846 IO depths : 1=1.6%, 2=3.3%, 4=11.0%, 8=72.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:20:37.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 complete : 0=0.0%, 4=90.1%, 8=4.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 issued rwts: total=1676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.846 filename2: (groupid=0, jobs=1): err= 0: pid=92002: Tue May 14 23:07:48 2024 00:20:37.846 read: IOPS=176, BW=706KiB/s (723kB/s)(7108KiB/10066msec) 00:20:37.846 slat (usec): min=4, max=6320, avg=22.79, stdev=201.94 00:20:37.846 clat (msec): min=30, max=177, avg=90.46, stdev=25.86 00:20:37.846 lat (msec): min=30, max=177, avg=90.48, stdev=25.86 00:20:37.846 clat percentiles (msec): 00:20:37.846 | 1.00th=[ 44], 5.00th=[ 53], 10.00th=[ 63], 20.00th=[ 70], 00:20:37.846 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 96], 00:20:37.846 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 126], 95.00th=[ 138], 00:20:37.846 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 178], 99.95th=[ 178], 00:20:37.846 | 99.99th=[ 178] 00:20:37.846 bw ( KiB/s): min= 480, max= 950, per=4.81%, avg=704.10, stdev=127.64, samples=20 00:20:37.846 iops : min= 120, max= 237, avg=175.95, stdev=31.86, samples=20 00:20:37.846 lat (msec) : 50=4.61%, 100=61.51%, 
250=33.88% 00:20:37.846 cpu : usr=44.01%, sys=1.57%, ctx=1299, majf=0, minf=9 00:20:37.846 IO depths : 1=1.9%, 2=3.9%, 4=10.9%, 8=71.9%, 16=11.4%, 32=0.0%, >=64=0.0% 00:20:37.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.846 issued rwts: total=1777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.846 filename2: (groupid=0, jobs=1): err= 0: pid=92003: Tue May 14 23:07:48 2024 00:20:37.846 read: IOPS=148, BW=596KiB/s (610kB/s)(5996KiB/10068msec) 00:20:37.846 slat (usec): min=6, max=8052, avg=43.77, stdev=358.68 00:20:37.846 clat (msec): min=41, max=263, avg=106.98, stdev=35.43 00:20:37.846 lat (msec): min=41, max=263, avg=107.03, stdev=35.44 00:20:37.846 clat percentiles (msec): 00:20:37.846 | 1.00th=[ 42], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 79], 00:20:37.846 | 30.00th=[ 91], 40.00th=[ 96], 50.00th=[ 107], 60.00th=[ 108], 00:20:37.846 | 70.00th=[ 120], 80.00th=[ 134], 90.00th=[ 153], 95.00th=[ 167], 00:20:37.846 | 99.00th=[ 215], 99.50th=[ 241], 99.90th=[ 264], 99.95th=[ 264], 00:20:37.846 | 99.99th=[ 264] 00:20:37.846 bw ( KiB/s): min= 383, max= 896, per=4.05%, avg=592.95, stdev=127.62, samples=20 00:20:37.846 iops : min= 95, max= 224, avg=148.20, stdev=31.97, samples=20 00:20:37.847 lat (msec) : 50=2.40%, 100=45.50%, 250=51.77%, 500=0.33% 00:20:37.847 cpu : usr=32.46%, sys=1.31%, ctx=883, majf=0, minf=9 00:20:37.847 IO depths : 1=1.7%, 2=4.0%, 4=12.5%, 8=70.0%, 16=11.8%, 32=0.0%, >=64=0.0% 00:20:37.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.847 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.847 issued rwts: total=1499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.847 filename2: (groupid=0, jobs=1): err= 0: pid=92004: Tue May 14 23:07:48 2024 00:20:37.847 read: IOPS=181, BW=726KiB/s (744kB/s)(7320KiB/10079msec) 00:20:37.847 slat (usec): min=4, max=4295, avg=36.03, stdev=228.14 00:20:37.847 clat (msec): min=6, max=195, avg=87.80, stdev=26.71 00:20:37.847 lat (msec): min=6, max=195, avg=87.84, stdev=26.73 00:20:37.847 clat percentiles (msec): 00:20:37.847 | 1.00th=[ 12], 5.00th=[ 48], 10.00th=[ 62], 20.00th=[ 68], 00:20:37.847 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 95], 00:20:37.847 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 132], 00:20:37.847 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 194], 99.95th=[ 194], 00:20:37.847 | 99.99th=[ 194] 00:20:37.847 bw ( KiB/s): min= 432, max= 1149, per=4.95%, avg=724.60, stdev=168.12, samples=20 00:20:37.847 iops : min= 108, max= 287, avg=181.05, stdev=42.06, samples=20 00:20:37.847 lat (msec) : 10=0.87%, 20=0.87%, 50=3.33%, 100=64.26%, 250=30.66% 00:20:37.847 cpu : usr=44.39%, sys=1.66%, ctx=1162, majf=0, minf=9 00:20:37.847 IO depths : 1=1.7%, 2=3.7%, 4=11.6%, 8=71.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:20:37.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.847 complete : 0=0.0%, 4=90.3%, 8=4.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.847 issued rwts: total=1830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:37.847 00:20:37.847 Run status group 0 (all jobs): 00:20:37.847 READ: bw=14.3MiB/s (15.0MB/s), 523KiB/s-774KiB/s (536kB/s-793kB/s), io=144MiB (151MB), run=10019-10081msec 
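As a quick cross-check of the per-file numbers in the run above, the reported bandwidth and IOPS are consistent with roughly 4 KiB average reads (the block size itself is configured earlier in dif.sh and is not visible in this excerpt). Taking the filename0 job at pid 91983 as an example:

    # BW / IOPS ~= average IO size; 571 KiB/s at 142 IOPS is ~4 KiB per read
    awk 'BEGIN { printf "avg read size: %.2f KiB\n", 571/142 }'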
00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 bdev_null0 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 [2024-05-14 23:07:49.112833] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 bdev_null1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.847 { 00:20:37.847 "params": { 00:20:37.847 "name": "Nvme$subsystem", 00:20:37.847 "trtype": "$TEST_TRANSPORT", 00:20:37.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.847 "adrfam": "ipv4", 00:20:37.847 "trsvcid": "$NVMF_PORT", 00:20:37.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.847 "hdgst": ${hdgst:-false}, 00:20:37.847 "ddgst": ${ddgst:-false} 00:20:37.847 }, 00:20:37.847 "method": "bdev_nvme_attach_controller" 00:20:37.847 } 00:20:37.847 EOF 00:20:37.847 )") 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.847 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local 
fio_dir=/usr/src/fio 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.848 { 00:20:37.848 "params": { 00:20:37.848 "name": "Nvme$subsystem", 00:20:37.848 "trtype": "$TEST_TRANSPORT", 00:20:37.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.848 "adrfam": "ipv4", 00:20:37.848 "trsvcid": "$NVMF_PORT", 00:20:37.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.848 "hdgst": ${hdgst:-false}, 00:20:37.848 "ddgst": ${ddgst:-false} 00:20:37.848 }, 00:20:37.848 "method": "bdev_nvme_attach_controller" 00:20:37.848 } 00:20:37.848 EOF 00:20:37.848 )") 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
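The rpc_cmd calls traced above boil down to the following standalone sequence (a sketch using scripts/rpc.py against an already-running nvmf_tgt with the tcp transport created; rpc_cmd in the harness is its wrapper for the same RPCs):

    # two 64 MB null bdevs, 512-byte blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
    # one NVMe-oF subsystem per bdev, each listening on the test TCP address
    for i in 0 1; do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done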
00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:37.848 "params": { 00:20:37.848 "name": "Nvme0", 00:20:37.848 "trtype": "tcp", 00:20:37.848 "traddr": "10.0.0.2", 00:20:37.848 "adrfam": "ipv4", 00:20:37.848 "trsvcid": "4420", 00:20:37.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:37.848 "hdgst": false, 00:20:37.848 "ddgst": false 00:20:37.848 }, 00:20:37.848 "method": "bdev_nvme_attach_controller" 00:20:37.848 },{ 00:20:37.848 "params": { 00:20:37.848 "name": "Nvme1", 00:20:37.848 "trtype": "tcp", 00:20:37.848 "traddr": "10.0.0.2", 00:20:37.848 "adrfam": "ipv4", 00:20:37.848 "trsvcid": "4420", 00:20:37.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.848 "hdgst": false, 00:20:37.848 "ddgst": false 00:20:37.848 }, 00:20:37.848 "method": "bdev_nvme_attach_controller" 00:20:37.848 }' 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:37.848 23:07:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.848 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:37.848 ... 00:20:37.848 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:37.848 ... 
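Outside the harness, a comparable run can be reproduced by saving the generated bdev configuration to a file and handing it to fio's spdk_bdev engine. The sketch below makes a few assumptions: the printed bdev_nvme_attach_controller entries sit inside the usual {"subsystems": [{"subsystem": "bdev", "config": [...]}]} envelope that --spdk_json_conf expects (the envelope itself is not shown in the trace), the namespace bdevs come up as Nvme0n1/Nvme1n1 (the default names for controllers Nvme0/Nvme1), and the job options mirror the bs=8k,16k,128k / numjobs=2 / iodepth=8 / runtime=5 settings set by dif.sh above rather than the exact output of gen_fio_conf:

    cat > dif_rand.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=./bdev.json
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF

    # bdev.json holds the two bdev_nvme_attach_controller entries printed above
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio dif_rand.fio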
00:20:37.848 fio-3.35 00:20:37.848 Starting 4 threads 00:20:43.171 00:20:43.171 filename0: (groupid=0, jobs=1): err= 0: pid=92126: Tue May 14 23:07:54 2024 00:20:43.171 read: IOPS=980, BW=7845KiB/s (8033kB/s)(38.4MiB/5009msec) 00:20:43.171 slat (nsec): min=4381, max=53957, avg=19251.25, stdev=5622.23 00:20:43.171 clat (usec): min=3145, max=15955, avg=8060.29, stdev=3574.34 00:20:43.171 lat (usec): min=3162, max=15976, avg=8079.54, stdev=3575.30 00:20:43.171 clat percentiles (usec): 00:20:43.171 | 1.00th=[ 4080], 5.00th=[ 4178], 10.00th=[ 4293], 20.00th=[ 4883], 00:20:43.171 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 7439], 00:20:43.171 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:43.171 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13042], 99.95th=[13042], 00:20:43.171 | 99.99th=[15926] 00:20:43.171 bw ( KiB/s): min= 4992, max=13157, per=25.02%, avg=7843.70, stdev=3584.14, samples=10 00:20:43.171 iops : min= 624, max= 1644, avg=980.40, stdev=447.91, samples=10 00:20:43.171 lat (msec) : 4=0.20%, 10=60.73%, 20=39.07% 00:20:43.171 cpu : usr=92.53%, sys=5.75%, ctx=4, majf=0, minf=9 00:20:43.171 IO depths : 1=10.5%, 2=25.0%, 4=50.0%, 8=14.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:43.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.171 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.171 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:43.171 filename0: (groupid=0, jobs=1): err= 0: pid=92127: Tue May 14 23:07:54 2024 00:20:43.171 read: IOPS=979, BW=7832KiB/s (8020kB/s)(38.2MiB/5001msec) 00:20:43.171 slat (usec): min=4, max=102, avg=17.17, stdev=10.35 00:20:43.171 clat (usec): min=4007, max=13518, avg=8075.53, stdev=3570.09 00:20:43.171 lat (usec): min=4024, max=13527, avg=8092.70, stdev=3569.17 00:20:43.171 clat percentiles (usec): 00:20:43.171 | 1.00th=[ 4113], 5.00th=[ 4178], 10.00th=[ 4293], 20.00th=[ 4948], 00:20:43.171 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 7439], 00:20:43.171 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:43.171 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13304], 99.95th=[13435], 00:20:43.171 | 99.99th=[13566] 00:20:43.171 bw ( KiB/s): min= 4992, max=12978, per=25.92%, avg=8126.44, stdev=3633.02, samples=9 00:20:43.171 iops : min= 624, max= 1622, avg=1015.78, stdev=454.09, samples=9 00:20:43.171 lat (msec) : 10=60.78%, 20=39.22% 00:20:43.171 cpu : usr=92.46%, sys=5.44%, ctx=123, majf=0, minf=9 00:20:43.171 IO depths : 1=10.8%, 2=25.0%, 4=50.0%, 8=14.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:43.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.171 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.171 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:43.171 filename1: (groupid=0, jobs=1): err= 0: pid=92128: Tue May 14 23:07:54 2024 00:20:43.171 read: IOPS=980, BW=7842KiB/s (8030kB/s)(38.4MiB/5009msec) 00:20:43.171 slat (nsec): min=4595, max=59183, avg=18432.97, stdev=9130.10 00:20:43.171 clat (usec): min=3287, max=15744, avg=8063.20, stdev=3567.78 00:20:43.171 lat (usec): min=3295, max=15766, avg=8081.63, stdev=3572.98 00:20:43.171 clat percentiles (usec): 00:20:43.171 | 1.00th=[ 4178], 5.00th=[ 4228], 10.00th=[ 4293], 20.00th=[ 4948], 00:20:43.171 | 30.00th=[ 5473], 
40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 7439], 00:20:43.171 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:43.171 | 99.00th=[12911], 99.50th=[13042], 99.90th=[15533], 99.95th=[15533], 00:20:43.171 | 99.99th=[15795] 00:20:43.171 bw ( KiB/s): min= 4992, max=13157, per=25.02%, avg=7843.70, stdev=3584.14, samples=10 00:20:43.171 iops : min= 624, max= 1644, avg=980.40, stdev=447.91, samples=10 00:20:43.171 lat (msec) : 4=0.16%, 10=61.12%, 20=38.72% 00:20:43.171 cpu : usr=92.53%, sys=5.91%, ctx=5, majf=0, minf=0 00:20:43.171 IO depths : 1=9.7%, 2=25.0%, 4=50.0%, 8=15.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:43.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.171 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.171 issued rwts: total=4910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:43.171 filename1: (groupid=0, jobs=1): err= 0: pid=92129: Tue May 14 23:07:54 2024 00:20:43.171 read: IOPS=980, BW=7845KiB/s (8033kB/s)(38.4MiB/5009msec) 00:20:43.171 slat (nsec): min=4889, max=56229, avg=19256.57, stdev=6154.88 00:20:43.171 clat (usec): min=2296, max=15783, avg=8053.10, stdev=3585.56 00:20:43.171 lat (usec): min=2327, max=15806, avg=8072.35, stdev=3585.86 00:20:43.171 clat percentiles (usec): 00:20:43.171 | 1.00th=[ 4080], 5.00th=[ 4178], 10.00th=[ 4293], 20.00th=[ 4883], 00:20:43.171 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5735], 60.00th=[ 7439], 00:20:43.171 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:20:43.171 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13173], 99.95th=[15401], 00:20:43.171 | 99.99th=[15795] 00:20:43.171 bw ( KiB/s): min= 4992, max=13184, per=25.03%, avg=7846.40, stdev=3588.59, samples=10 00:20:43.171 iops : min= 624, max= 1648, avg=980.80, stdev=448.57, samples=10 00:20:43.171 lat (msec) : 4=0.35%, 10=60.59%, 20=39.07% 00:20:43.171 cpu : usr=93.15%, sys=4.93%, ctx=6, majf=0, minf=9 00:20:43.171 IO depths : 1=11.9%, 2=25.0%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:43.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.172 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.172 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.172 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:43.172 00:20:43.172 Run status group 0 (all jobs): 00:20:43.172 READ: bw=30.6MiB/s (32.1MB/s), 7832KiB/s-7845KiB/s (8020kB/s-8033kB/s), io=153MiB (161MB), run=5001-5009msec 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.172 23:07:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.172 ************************************ 00:20:43.172 END TEST fio_dif_rand_params 00:20:43.172 ************************************ 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.172 00:20:43.172 real 0m23.479s 00:20:43.172 user 2m4.276s 00:20:43.172 sys 0m6.396s 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:43.172 23:07:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:43.172 23:07:55 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:43.172 23:07:55 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:43.172 23:07:55 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:43.172 23:07:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:43.172 ************************************ 00:20:43.172 START TEST fio_dif_digest 00:20:43.172 ************************************ 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:43.172 bdev_null0 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:43.172 [2024-05-14 23:07:55.266229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:43.172 { 00:20:43.172 "params": { 00:20:43.172 "name": "Nvme$subsystem", 00:20:43.172 "trtype": "$TEST_TRANSPORT", 00:20:43.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.172 "adrfam": "ipv4", 00:20:43.172 "trsvcid": "$NVMF_PORT", 00:20:43.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.172 "hdgst": ${hdgst:-false}, 00:20:43.172 "ddgst": ${ddgst:-false} 00:20:43.172 }, 00:20:43.172 "method": "bdev_nvme_attach_controller" 00:20:43.172 } 00:20:43.172 EOF 00:20:43.172 )") 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:43.172 "params": { 00:20:43.172 "name": "Nvme0", 00:20:43.172 "trtype": "tcp", 00:20:43.172 "traddr": "10.0.0.2", 00:20:43.172 "adrfam": "ipv4", 00:20:43.172 "trsvcid": "4420", 00:20:43.172 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.172 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:43.172 "hdgst": true, 00:20:43.172 "ddgst": true 00:20:43.172 }, 00:20:43.172 "method": "bdev_nvme_attach_controller" 00:20:43.172 }' 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:43.172 23:07:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:43.172 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:43.172 ... 
00:20:43.172 fio-3.35 00:20:43.172 Starting 3 threads 00:20:55.370 00:20:55.370 filename0: (groupid=0, jobs=1): err= 0: pid=92234: Tue May 14 23:08:05 2024 00:20:55.370 read: IOPS=179, BW=22.4MiB/s (23.5MB/s)(224MiB/10006msec) 00:20:55.370 slat (nsec): min=8350, max=76497, avg=21539.78, stdev=10247.24 00:20:55.370 clat (usec): min=8125, max=48092, avg=16710.29, stdev=4967.34 00:20:55.370 lat (usec): min=8139, max=48108, avg=16731.83, stdev=4969.77 00:20:55.370 clat percentiles (usec): 00:20:55.370 | 1.00th=[11600], 5.00th=[12911], 10.00th=[13435], 20.00th=[13960], 00:20:55.370 | 30.00th=[14484], 40.00th=[15139], 50.00th=[15664], 60.00th=[16188], 00:20:55.370 | 70.00th=[16909], 80.00th=[17957], 90.00th=[19792], 95.00th=[21627], 00:20:55.370 | 99.00th=[41157], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:20:55.370 | 99.99th=[47973] 00:20:55.370 bw ( KiB/s): min= 9216, max=27648, per=34.24%, avg=22924.80, stdev=4393.00, samples=20 00:20:55.370 iops : min= 72, max= 216, avg=179.10, stdev=34.32, samples=20 00:20:55.370 lat (msec) : 10=0.56%, 20=90.74%, 50=8.70% 00:20:55.370 cpu : usr=91.24%, sys=6.89%, ctx=8, majf=0, minf=0 00:20:55.370 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.370 issued rwts: total=1793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.370 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:55.370 filename0: (groupid=0, jobs=1): err= 0: pid=92235: Tue May 14 23:08:05 2024 00:20:55.370 read: IOPS=148, BW=18.6MiB/s (19.5MB/s)(186MiB/10007msec) 00:20:55.370 slat (nsec): min=4971, max=85966, avg=21355.02, stdev=9688.50 00:20:55.370 clat (usec): min=10652, max=59370, avg=20155.31, stdev=6400.93 00:20:55.370 lat (usec): min=10669, max=59389, avg=20176.66, stdev=6401.75 00:20:55.370 clat percentiles (usec): 00:20:55.370 | 1.00th=[14746], 5.00th=[15664], 10.00th=[16188], 20.00th=[16909], 00:20:55.370 | 30.00th=[17171], 40.00th=[17957], 50.00th=[18220], 60.00th=[19006], 00:20:55.370 | 70.00th=[20317], 80.00th=[22414], 90.00th=[24511], 95.00th=[26084], 00:20:55.370 | 99.00th=[52691], 99.50th=[54789], 99.90th=[59507], 99.95th=[59507], 00:20:55.370 | 99.99th=[59507] 00:20:55.370 bw ( KiB/s): min= 6912, max=22272, per=28.39%, avg=19008.00, stdev=3900.05, samples=20 00:20:55.370 iops : min= 54, max= 174, avg=148.50, stdev=30.47, samples=20 00:20:55.370 lat (msec) : 20=67.85%, 50=30.33%, 100=1.82% 00:20:55.370 cpu : usr=91.50%, sys=6.61%, ctx=85, majf=0, minf=9 00:20:55.370 IO depths : 1=5.5%, 2=94.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.370 issued rwts: total=1487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.370 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:55.370 filename0: (groupid=0, jobs=1): err= 0: pid=92236: Tue May 14 23:08:05 2024 00:20:55.370 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(244MiB/10008msec) 00:20:55.370 slat (nsec): min=5077, max=80734, avg=24896.70, stdev=10816.66 00:20:55.370 clat (usec): min=10359, max=56022, avg=15324.47, stdev=4760.55 00:20:55.370 lat (usec): min=10381, max=56039, avg=15349.37, stdev=4764.13 00:20:55.370 clat percentiles (usec): 00:20:55.370 | 1.00th=[11076], 5.00th=[11863], 10.00th=[12256], 20.00th=[12780], 00:20:55.370 | 
30.00th=[13173], 40.00th=[13698], 50.00th=[14091], 60.00th=[14615], 00:20:55.370 | 70.00th=[15664], 80.00th=[16909], 90.00th=[18220], 95.00th=[19530], 00:20:55.370 | 99.00th=[38011], 99.50th=[39584], 99.90th=[55313], 99.95th=[55837], 00:20:55.370 | 99.99th=[55837] 00:20:55.370 bw ( KiB/s): min= 9984, max=29696, per=37.32%, avg=24985.60, stdev=4643.35, samples=20 00:20:55.370 iops : min= 78, max= 232, avg=195.20, stdev=36.28, samples=20 00:20:55.370 lat (msec) : 20=95.55%, 50=4.30%, 100=0.15% 00:20:55.370 cpu : usr=89.76%, sys=7.75%, ctx=16, majf=0, minf=9 00:20:55.370 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:55.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:55.370 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:55.370 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:55.370 00:20:55.370 Run status group 0 (all jobs): 00:20:55.370 READ: bw=65.4MiB/s (68.6MB/s), 18.6MiB/s-24.4MiB/s (19.5MB/s-25.6MB/s), io=654MiB (686MB), run=10006-10008msec 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:55.370 ************************************ 00:20:55.370 END TEST fio_dif_digest 00:20:55.370 ************************************ 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.370 00:20:55.370 real 0m10.910s 00:20:55.370 user 0m27.849s 00:20:55.370 sys 0m2.369s 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:55.370 23:08:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:55.370 23:08:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:55.370 23:08:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:55.370 23:08:06 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:55.370 23:08:06 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:20:55.370 23:08:06 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:55.370 23:08:06 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:20:55.370 23:08:06 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:55.370 23:08:06 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:55.370 rmmod nvme_tcp 00:20:55.370 rmmod nvme_fabrics 00:20:55.370 rmmod nvme_keyring 00:20:55.370 23:08:06 nvmf_dif -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:20:55.370 23:08:06 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:20:55.370 23:08:06 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:20:55.370 23:08:06 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 91474 ']' 00:20:55.370 23:08:06 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 91474 00:20:55.370 23:08:06 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 91474 ']' 00:20:55.371 23:08:06 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 91474 00:20:55.371 23:08:06 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:20:55.371 23:08:06 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:55.371 23:08:06 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91474 00:20:55.371 killing process with pid 91474 00:20:55.371 23:08:06 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:55.371 23:08:06 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:55.371 23:08:06 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91474' 00:20:55.371 23:08:06 nvmf_dif -- common/autotest_common.sh@965 -- # kill 91474 00:20:55.371 [2024-05-14 23:08:06.303863] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:55.371 23:08:06 nvmf_dif -- common/autotest_common.sh@970 -- # wait 91474 00:20:55.371 23:08:06 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:55.371 23:08:06 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:55.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:55.371 Waiting for block devices as requested 00:20:55.371 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:55.371 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:55.371 23:08:07 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:55.371 23:08:07 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:55.371 23:08:07 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:55.371 23:08:07 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:55.371 23:08:07 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.371 23:08:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:55.371 23:08:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.371 23:08:07 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:55.371 ************************************ 00:20:55.371 END TEST nvmf_dif 00:20:55.371 ************************************ 00:20:55.371 00:20:55.371 real 0m59.537s 00:20:55.371 user 3m48.284s 00:20:55.371 sys 0m16.528s 00:20:55.371 23:08:07 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:55.371 23:08:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:55.371 23:08:07 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:55.371 23:08:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:55.371 23:08:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:55.371 23:08:07 -- common/autotest_common.sh@10 -- # set +x 00:20:55.371 ************************************ 00:20:55.371 START TEST nvmf_abort_qd_sizes 00:20:55.371 
************************************ 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:55.371 * Looking for test storage... 00:20:55.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:55.371 23:08:07 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:55.371 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:55.372 Cannot find device "nvmf_tgt_br" 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.372 Cannot find device "nvmf_tgt_br2" 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:55.372 Cannot find device "nvmf_tgt_br" 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:55.372 Cannot find device "nvmf_tgt_br2" 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:55.372 23:08:07 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:55.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:20:55.372 00:20:55.372 --- 10.0.0.2 ping statistics --- 00:20:55.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.372 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:55.372 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:55.372 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:20:55.372 00:20:55.372 --- 10.0.0.3 ping statistics --- 00:20:55.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.372 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:55.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:55.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:20:55.372 00:20:55.372 --- 10.0.0.1 ping statistics --- 00:20:55.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.372 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:55.372 23:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:55.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:55.938 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:56.196 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=92822 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 92822 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 92822 ']' 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:56.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:56.196 23:08:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:56.196 [2024-05-14 23:08:08.536176] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:20:56.196 [2024-05-14 23:08:08.536292] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.454 [2024-05-14 23:08:08.670356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.454 [2024-05-14 23:08:08.761183] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.454 [2024-05-14 23:08:08.761263] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.454 [2024-05-14 23:08:08.761297] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.454 [2024-05-14 23:08:08.761311] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.454 [2024-05-14 23:08:08.761323] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.454 [2024-05-14 23:08:08.761450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.454 [2024-05-14 23:08:08.761543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.454 [2024-05-14 23:08:08.762045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.454 [2024-05-14 23:08:08.762063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:20:57.387 23:08:09 
nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:57.387 23:08:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:57.387 ************************************ 00:20:57.387 START TEST spdk_target_abort 00:20:57.387 ************************************ 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:57.387 spdk_targetn1 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:57.387 [2024-05-14 23:08:09.767834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.387 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:57.645 [2024-05-14 23:08:09.800294] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:57.645 [2024-05-14 23:08:09.800614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:57.645 23:08:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:00.928 Initializing NVMe Controllers 00:21:00.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:00.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:00.928 Initialization complete. Launching workers. 
00:21:00.928 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10698, failed: 0 00:21:00.928 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1053, failed to submit 9645 00:21:00.928 success 766, unsuccess 287, failed 0 00:21:00.928 23:08:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:00.928 23:08:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:04.212 Initializing NVMe Controllers 00:21:04.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:04.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:04.212 Initialization complete. Launching workers. 00:21:04.212 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5641, failed: 0 00:21:04.212 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1166, failed to submit 4475 00:21:04.212 success 237, unsuccess 929, failed 0 00:21:04.212 23:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:04.212 23:08:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:07.571 Initializing NVMe Controllers 00:21:07.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:07.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:07.571 Initialization complete. Launching workers. 
00:21:07.571 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28828, failed: 0 00:21:07.571 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2531, failed to submit 26297 00:21:07.571 success 341, unsuccess 2190, failed 0 00:21:07.571 23:08:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:07.571 23:08:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.571 23:08:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:07.571 23:08:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.571 23:08:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:07.571 23:08:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.571 23:08:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 92822 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 92822 ']' 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 92822 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92822 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92822' 00:21:08.507 killing process with pid 92822 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 92822 00:21:08.507 [2024-05-14 23:08:20.681435] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 92822 00:21:08.507 00:21:08.507 real 0m11.199s 00:21:08.507 user 0m44.850s 00:21:08.507 sys 0m1.880s 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:08.507 ************************************ 00:21:08.507 END TEST spdk_target_abort 00:21:08.507 ************************************ 00:21:08.507 23:08:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:08.765 23:08:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:08.765 23:08:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:08.765 23:08:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:21:08.765 23:08:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:08.765 ************************************ 00:21:08.765 START TEST kernel_target_abort 00:21:08.765 ************************************ 00:21:08.765 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:21:08.765 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:08.765 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:21:08.765 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:08.765 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:08.766 23:08:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:09.023 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:09.023 Waiting for block devices as requested 00:21:09.023 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:09.282 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:09.282 No valid GPT data, bailing 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:09.282 No valid GPT data, bailing 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:09.282 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:09.540 No valid GPT data, bailing 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:09.540 No valid GPT data, bailing 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de --hostid=58e20ac9-ba72-448e-a374-94608cfdd9de -a 10.0.0.1 -t tcp -s 4420 00:21:09.540 00:21:09.540 Discovery Log Number of Records 2, Generation counter 2 00:21:09.540 =====Discovery Log Entry 0====== 00:21:09.540 trtype: tcp 00:21:09.540 adrfam: ipv4 00:21:09.540 subtype: current discovery subsystem 00:21:09.540 treq: not specified, sq flow control disable supported 00:21:09.540 portid: 1 00:21:09.540 trsvcid: 4420 00:21:09.540 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:09.540 traddr: 10.0.0.1 00:21:09.540 eflags: none 00:21:09.540 sectype: none 00:21:09.540 =====Discovery Log Entry 1====== 00:21:09.540 trtype: tcp 00:21:09.540 adrfam: ipv4 00:21:09.540 subtype: nvme subsystem 00:21:09.540 treq: not specified, sq flow control disable supported 00:21:09.540 portid: 1 00:21:09.540 trsvcid: 4420 00:21:09.540 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:09.540 traddr: 10.0.0.1 00:21:09.540 eflags: none 00:21:09.540 sectype: none 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:09.540 23:08:21 
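The configfs calls traced above build a kernel NVMe-oF/TCP target around the block device the preceding scan settled on (/dev/nvme1n1, not zoned and with no GPT in use): one subsystem, one namespace, one TCP port on 10.0.0.1:4420, and a symlink that exports the subsystem through the port, verified with nvme discover. A minimal standalone sketch of the same sequence follows. Note that xtrace records only the echoed values, not their redirection targets, so the attribute file names below (attr_allow_any_host, device_path, enable, addr_*) are the standard kernel nvmet configfs names and are an assumption rather than something taken from the log.

    # Sketch only, rebuilt from the mkdir/echo/ln -s calls in the trace above.
    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"   # configfs populates the child directories itself
    # The trace also echoes a model string "SPDK-$nqn"; its destination file is not captured by xtrace.
    echo 1            > "$subsys/attr_allow_any_host"        # accept any host NQN
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # back the namespace with the chosen disk
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                      # export the subsystem through the port

    # Verify the export the same way the test does (the trace also passes --hostnqn/--hostid):
    nvme discover -t tcp -a 10.0.0.1 -s 4420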
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:09.540 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:09.541 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:09.541 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:09.541 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:09.541 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:09.541 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:09.541 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:09.541 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:09.541 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:09.541 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:09.541 23:08:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:12.854 Initializing NVMe Controllers 00:21:12.854 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:12.854 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:12.854 Initialization complete. Launching workers. 00:21:12.854 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33212, failed: 0 00:21:12.854 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33212, failed to submit 0 00:21:12.854 success 0, unsuccess 33212, failed 0 00:21:12.854 23:08:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:12.854 23:08:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:16.140 Initializing NVMe Controllers 00:21:16.140 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:16.140 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:16.140 Initialization complete. Launching workers. 
00:21:16.140 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64100, failed: 0 00:21:16.140 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26643, failed to submit 37457 00:21:16.140 success 0, unsuccess 26643, failed 0 00:21:16.140 23:08:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:16.140 23:08:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:19.425 Initializing NVMe Controllers 00:21:19.425 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:19.425 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:19.425 Initialization complete. Launching workers. 00:21:19.425 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71283, failed: 0 00:21:19.425 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17794, failed to submit 53489 00:21:19.425 success 0, unsuccess 17794, failed 0 00:21:19.425 23:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:19.425 23:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:19.425 23:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:19.425 23:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:19.425 23:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:19.425 23:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:19.425 23:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:19.425 23:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:19.425 23:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:19.425 23:08:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:19.684 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:21.583 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:21.583 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:21.583 00:21:21.583 real 0m12.822s 00:21:21.583 user 0m6.170s 00:21:21.583 sys 0m3.961s 00:21:21.583 23:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:21.583 ************************************ 00:21:21.583 23:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:21.583 END TEST kernel_target_abort 00:21:21.583 ************************************ 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:21.583 
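The three runs above are the rabort helper in condensed form: it assembles the transport ID string one field at a time, then drives the SPDK abort example once per queue depth from qds=(4 24 64). At depth 4 every completed I/O had a matching abort submitted (33212/33212); at depths 24 and 64 an increasing share is reported as "failed to submit" (37457 and 53489). A sketch of the loop, with the binary path and arguments copied from the trace (the abort_bin/target shorthand is added here):

    abort_bin=/home/vagrant/spdk_repo/spdk/build/examples/abort
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # -q: queue depth, -w rw -M 50: 50/50 read/write mix, -o 4096: 4 KiB I/O size
        "$abort_bin" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done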
23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:21.583 rmmod nvme_tcp 00:21:21.583 rmmod nvme_fabrics 00:21:21.583 rmmod nvme_keyring 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 92822 ']' 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 92822 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 92822 ']' 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 92822 00:21:21.583 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (92822) - No such process 00:21:21.583 Process with pid 92822 is not found 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 92822 is not found' 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:21.583 23:08:33 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:21.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:21.840 Waiting for block devices as requested 00:21:21.840 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:22.098 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:22.098 23:08:34 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:22.098 23:08:34 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:22.098 23:08:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:22.098 23:08:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:22.098 23:08:34 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.098 23:08:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:22.098 23:08:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.098 23:08:34 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:22.098 00:21:22.098 real 0m27.278s 00:21:22.098 user 0m52.203s 00:21:22.098 sys 0m7.034s 00:21:22.098 23:08:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:22.098 23:08:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:22.098 ************************************ 00:21:22.098 END TEST nvmf_abort_qd_sizes 00:21:22.098 ************************************ 00:21:22.098 23:08:34 -- spdk/autotest.sh@291 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:22.098 23:08:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:22.098 23:08:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:22.098 23:08:34 -- common/autotest_common.sh@10 -- # set +x 
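Before the keyring test starts below, clean_kernel_target and nvmftestfini in the trace above dismantle everything the setup created: the namespace is disabled, the port-to-subsystem link and the configfs directories are removed child-first, the nvmet modules are unloaded, and setup.sh rebinds the NVMe devices. The configfs half, reconstructed as a sketch (xtrace again hides the redirection target of the bare 'echo 0'; namespaces/1/enable is assumed):

    nqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable
    rm -f    /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
    rmdir    /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
    rmdir    /sys/kernel/config/nvmet/ports/1
    rmdir    /sys/kernel/config/nvmet/subsystems/$nqn
    modprobe -r nvmet_tcp nvmet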
00:21:22.098 ************************************ 00:21:22.098 START TEST keyring_file 00:21:22.098 ************************************ 00:21:22.098 23:08:34 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:22.355 * Looking for test storage... 00:21:22.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:22.355 23:08:34 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:22.355 23:08:34 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:58e20ac9-ba72-448e-a374-94608cfdd9de 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=58e20ac9-ba72-448e-a374-94608cfdd9de 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:22.355 23:08:34 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.355 23:08:34 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.355 23:08:34 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.355 23:08:34 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.355 23:08:34 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.355 23:08:34 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.355 23:08:34 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:22.355 23:08:34 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.355 23:08:34 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:22.355 23:08:34 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:22.355 23:08:34 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:22.355 23:08:34 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:22.355 23:08:34 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:22.355 23:08:34 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:22.355 23:08:34 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:22.355 23:08:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:22.355 23:08:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:22.355 23:08:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:22.355 23:08:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:22.355 23:08:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:22.355 23:08:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.upA8pIPDLK 00:21:22.355 23:08:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:22.355 23:08:34 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:22.355 23:08:34 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.upA8pIPDLK 00:21:22.355 23:08:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.upA8pIPDLK 00:21:22.355 23:08:34 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.upA8pIPDLK 00:21:22.355 23:08:34 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:22.356 23:08:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:22.356 23:08:34 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:22.356 23:08:34 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:22.356 23:08:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:22.356 23:08:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:22.356 23:08:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MC8y8xs5T4 00:21:22.356 23:08:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:22.356 23:08:34 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:22.356 23:08:34 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:22.356 23:08:34 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:22.356 23:08:34 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:22.356 23:08:34 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:22.356 23:08:34 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:22.356 23:08:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MC8y8xs5T4 00:21:22.356 23:08:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MC8y8xs5T4 00:21:22.356 23:08:34 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.MC8y8xs5T4 00:21:22.356 23:08:34 keyring_file -- keyring/file.sh@30 -- # tgtpid=93704 00:21:22.356 23:08:34 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:22.356 23:08:34 keyring_file -- keyring/file.sh@32 -- # waitforlisten 93704 00:21:22.356 23:08:34 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 93704 ']' 00:21:22.356 23:08:34 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.356 23:08:34 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:22.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.356 23:08:34 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.356 23:08:34 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:22.356 23:08:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:22.356 [2024-05-14 23:08:34.743509] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
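prep_key, traced above for key0 and key1, turns a raw hex key into a file the keyring can load: mktemp allocates the path (/tmp/tmp.upA8pIPDLK and /tmp/tmp.MC8y8xs5T4 in this run), format_interchange_psk renders the hex string in the NVMe TLS interchange form with the NVMeTLSkey-1 prefix via an inline python helper whose body xtrace does not show, and chmod 0600 locks the file down, which matters later when a 0660 copy is rejected. A condensed sketch that reuses the helper as the sourced common.sh defines it:

    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # provides format_interchange_psk / format_key
    key_hex=00112233445566778899aabbccddeeff   # key0 from the trace; key1 is 112233445566778899aabbccddeeff00
    digest=0
    path=$(mktemp)                              # e.g. /tmp/tmp.upA8pIPDLK
    format_interchange_psk "$key_hex" "$digest" > "$path"   # wraps: format_key NVMeTLSkey-1 <hex> <digest>
    chmod 0600 "$path"                          # a looser mode is refused by keyring_file_add_key later on
    echo "$path"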
00:21:22.356 [2024-05-14 23:08:34.743638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93704 ] 00:21:22.614 [2024-05-14 23:08:34.883266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.614 [2024-05-14 23:08:34.969638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:21:23.590 23:08:35 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:23.590 [2024-05-14 23:08:35.839852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.590 null0 00:21:23.590 [2024-05-14 23:08:35.871742] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:23.590 [2024-05-14 23:08:35.871857] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.590 [2024-05-14 23:08:35.872068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:23.590 [2024-05-14 23:08:35.879820] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.590 23:08:35 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:23.590 [2024-05-14 23:08:35.891817] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:23.590 2024/05/14 23:08:35 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:21:23.590 request: 00:21:23.590 { 00:21:23.590 "method": "nvmf_subsystem_add_listener", 00:21:23.590 "params": { 00:21:23.590 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.590 "secure_channel": false, 
00:21:23.590 "listen_address": { 00:21:23.590 "trtype": "tcp", 00:21:23.590 "traddr": "127.0.0.1", 00:21:23.590 "trsvcid": "4420" 00:21:23.590 } 00:21:23.590 } 00:21:23.590 } 00:21:23.590 Got JSON-RPC error response 00:21:23.590 GoRPCClient: error on JSON-RPC call 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:23.590 23:08:35 keyring_file -- keyring/file.sh@46 -- # bperfpid=93739 00:21:23.590 23:08:35 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:23.590 23:08:35 keyring_file -- keyring/file.sh@48 -- # waitforlisten 93739 /var/tmp/bperf.sock 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 93739 ']' 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:23.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:23.590 23:08:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:23.590 [2024-05-14 23:08:35.965550] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:21:23.590 [2024-05-14 23:08:35.965682] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93739 ] 00:21:23.848 [2024-05-14 23:08:36.104504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.848 [2024-05-14 23:08:36.170612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.106 23:08:36 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:24.106 23:08:36 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:21:24.106 23:08:36 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.upA8pIPDLK 00:21:24.106 23:08:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.upA8pIPDLK 00:21:24.363 23:08:36 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MC8y8xs5T4 00:21:24.363 23:08:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MC8y8xs5T4 00:21:24.621 23:08:36 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:24.621 23:08:36 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:24.621 23:08:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:24.621 23:08:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:24.621 23:08:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.879 23:08:37 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.upA8pIPDLK == \/\t\m\p\/\t\m\p\.\u\p\A\8\p\I\P\D\L\K ]] 00:21:24.879 23:08:37 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:24.879 23:08:37 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:24.879 23:08:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:24.879 23:08:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:24.879 23:08:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:25.446 23:08:37 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.MC8y8xs5T4 == \/\t\m\p\/\t\m\p\.\M\C\8\y\8\x\s\5\T\4 ]] 00:21:25.446 23:08:37 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:25.446 23:08:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:25.446 23:08:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:25.446 23:08:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:25.446 23:08:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:25.446 23:08:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:25.705 23:08:38 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:25.705 23:08:38 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:25.705 23:08:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:25.705 23:08:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:25.705 23:08:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:25.705 23:08:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
00:21:25.705 23:08:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:25.963 23:08:38 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:25.963 23:08:38 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:25.963 23:08:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:26.221 [2024-05-14 23:08:38.493228] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.221 nvme0n1 00:21:26.221 23:08:38 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:26.221 23:08:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:26.221 23:08:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:26.221 23:08:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:26.221 23:08:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:26.221 23:08:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:26.801 23:08:38 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:26.801 23:08:38 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:26.801 23:08:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:26.801 23:08:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:26.801 23:08:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:26.801 23:08:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:26.801 23:08:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:27.064 23:08:39 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:27.064 23:08:39 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:27.064 Running I/O for 1 seconds... 
00:21:27.999 00:21:27.999 Latency(us) 00:21:27.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.999 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:27.999 nvme0n1 : 1.01 10588.54 41.36 0.00 0.00 12000.46 9353.77 23712.12 00:21:27.999 =================================================================================================================== 00:21:27.999 Total : 10588.54 41.36 0.00 0.00 12000.46 9353.77 23712.12 00:21:27.999 0 00:21:27.999 23:08:40 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:27.999 23:08:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:28.257 23:08:40 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:28.257 23:08:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:28.257 23:08:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:28.257 23:08:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:28.257 23:08:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:28.257 23:08:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:28.823 23:08:40 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:28.823 23:08:40 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:28.823 23:08:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:28.823 23:08:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:28.823 23:08:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:28.823 23:08:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:28.823 23:08:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:29.082 23:08:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:29.082 23:08:41 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:29.082 23:08:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:29.082 23:08:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:29.082 23:08:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:29.082 23:08:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.082 23:08:41 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:29.082 23:08:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.082 23:08:41 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:29.082 23:08:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:29.341 [2024-05-14 23:08:41.510353] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:29.341 [2024-05-14 23:08:41.510967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0bf10 (107): Transport endpoint is not connected 00:21:29.341 [2024-05-14 23:08:41.511952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0bf10 (9): Bad file descriptor 00:21:29.341 [2024-05-14 23:08:41.512948] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:29.341 [2024-05-14 23:08:41.512976] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:29.341 [2024-05-14 23:08:41.512988] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:29.341 2024/05/14 23:08:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:29.341 request: 00:21:29.341 { 00:21:29.341 "method": "bdev_nvme_attach_controller", 00:21:29.341 "params": { 00:21:29.341 "name": "nvme0", 00:21:29.341 "trtype": "tcp", 00:21:29.341 "traddr": "127.0.0.1", 00:21:29.341 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:29.341 "adrfam": "ipv4", 00:21:29.341 "trsvcid": "4420", 00:21:29.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:29.341 "psk": "key1" 00:21:29.341 } 00:21:29.341 } 00:21:29.341 Got JSON-RPC error response 00:21:29.341 GoRPCClient: error on JSON-RPC call 00:21:29.341 23:08:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:29.341 23:08:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:29.341 23:08:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:29.341 23:08:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:29.341 23:08:41 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:29.341 23:08:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:29.341 23:08:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:29.341 23:08:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:29.341 23:08:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:29.341 23:08:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:29.599 23:08:41 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:29.599 23:08:41 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:29.599 23:08:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:29.599 23:08:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:29.599 23:08:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:29.599 23:08:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:29.599 23:08:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:29.858 23:08:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:29.858 23:08:42 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:29.858 23:08:42 keyring_file 
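The failed bdev_nvme_attach_controller above is a deliberate negative test (the call is wrapped in NOT): bdevperf offers key1 while the target side was configured with a PSK, presumably the key0 file since attaching with key0 succeeded earlier, so the connection is torn down during setup and the RPC surfaces the "Transport endpoint is not connected" / bad-file-descriptor errors before returning Invalid parameters. For contrast, the successful form issued earlier in the trace:

    # Attach as issued with psk key0; swapping in key1 is what produces the failure logged above.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0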
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:30.116 23:08:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:30.116 23:08:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:30.374 23:08:42 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:30.374 23:08:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:30.374 23:08:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:30.631 23:08:42 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:30.631 23:08:42 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.upA8pIPDLK 00:21:30.631 23:08:42 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.upA8pIPDLK 00:21:30.631 23:08:42 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:30.631 23:08:42 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.upA8pIPDLK 00:21:30.631 23:08:42 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:30.631 23:08:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.631 23:08:42 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:30.631 23:08:42 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.631 23:08:42 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.upA8pIPDLK 00:21:30.631 23:08:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.upA8pIPDLK 00:21:30.888 [2024-05-14 23:08:43.163365] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.upA8pIPDLK': 0100660 00:21:30.888 [2024-05-14 23:08:43.163416] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:30.888 2024/05/14 23:08:43 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.upA8pIPDLK], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:21:30.888 request: 00:21:30.888 { 00:21:30.888 "method": "keyring_file_add_key", 00:21:30.888 "params": { 00:21:30.888 "name": "key0", 00:21:30.888 "path": "/tmp/tmp.upA8pIPDLK" 00:21:30.888 } 00:21:30.888 } 00:21:30.888 Got JSON-RPC error response 00:21:30.888 GoRPCClient: error on JSON-RPC call 00:21:30.888 23:08:43 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:30.888 23:08:43 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:30.888 23:08:43 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:30.888 23:08:43 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:30.888 23:08:43 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.upA8pIPDLK 00:21:30.888 23:08:43 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.upA8pIPDLK 00:21:30.888 23:08:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.upA8pIPDLK 00:21:31.157 23:08:43 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.upA8pIPDLK 00:21:31.157 23:08:43 
keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:31.157 23:08:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:31.157 23:08:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:31.157 23:08:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:31.157 23:08:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:31.157 23:08:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:31.414 23:08:43 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:31.414 23:08:43 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:31.414 23:08:43 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:31.414 23:08:43 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:31.414 23:08:43 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:31.414 23:08:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.414 23:08:43 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:31.414 23:08:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.414 23:08:43 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:31.414 23:08:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:31.672 [2024-05-14 23:08:43.995532] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.upA8pIPDLK': No such file or directory 00:21:31.672 [2024-05-14 23:08:43.995584] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:31.672 [2024-05-14 23:08:43.995611] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:31.672 [2024-05-14 23:08:43.995620] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:31.672 [2024-05-14 23:08:43.995629] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:31.672 2024/05/14 23:08:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:21:31.673 request: 00:21:31.673 { 00:21:31.673 "method": "bdev_nvme_attach_controller", 00:21:31.673 "params": { 00:21:31.673 "name": "nvme0", 00:21:31.673 "trtype": "tcp", 00:21:31.673 "traddr": "127.0.0.1", 00:21:31.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:31.673 "adrfam": "ipv4", 00:21:31.673 "trsvcid": "4420", 00:21:31.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:31.673 "psk": "key0" 00:21:31.673 } 00:21:31.673 } 
00:21:31.673 Got JSON-RPC error response 00:21:31.673 GoRPCClient: error on JSON-RPC call 00:21:31.673 23:08:44 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:31.673 23:08:44 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.673 23:08:44 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.673 23:08:44 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.673 23:08:44 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:31.673 23:08:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:31.931 23:08:44 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:31.931 23:08:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:31.931 23:08:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:31.931 23:08:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:31.931 23:08:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:31.931 23:08:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:31.931 23:08:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PNnyKSfr8l 00:21:31.931 23:08:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:31.931 23:08:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:31.931 23:08:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:31.931 23:08:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:31.931 23:08:44 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:31.931 23:08:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:31.931 23:08:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:32.189 23:08:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PNnyKSfr8l 00:21:32.189 23:08:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PNnyKSfr8l 00:21:32.189 23:08:44 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.PNnyKSfr8l 00:21:32.189 23:08:44 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PNnyKSfr8l 00:21:32.189 23:08:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PNnyKSfr8l 00:21:32.448 23:08:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:32.448 23:08:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:32.706 nvme0n1 00:21:32.706 23:08:44 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:32.706 23:08:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:32.706 23:08:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:32.706 23:08:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:32.706 23:08:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:32.706 23:08:44 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:32.965 23:08:45 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:32.965 23:08:45 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:32.965 23:08:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:33.223 23:08:45 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:33.223 23:08:45 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:33.223 23:08:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:33.223 23:08:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.223 23:08:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:33.481 23:08:45 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:33.481 23:08:45 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:33.481 23:08:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:33.481 23:08:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:33.481 23:08:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:33.481 23:08:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:33.481 23:08:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.047 23:08:46 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:34.048 23:08:46 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:34.048 23:08:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:34.048 23:08:46 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:34.048 23:08:46 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:34.048 23:08:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.614 23:08:46 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:34.614 23:08:46 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PNnyKSfr8l 00:21:34.614 23:08:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PNnyKSfr8l 00:21:34.614 23:08:46 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MC8y8xs5T4 00:21:34.614 23:08:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MC8y8xs5T4 00:21:35.181 23:08:47 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:35.181 23:08:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:35.438 nvme0n1 00:21:35.438 23:08:47 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:35.438 23:08:47 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:35.708 23:08:48 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:35.708 "subsystems": [ 00:21:35.708 { 00:21:35.708 "subsystem": "keyring", 00:21:35.708 "config": [ 00:21:35.708 { 00:21:35.708 "method": "keyring_file_add_key", 00:21:35.708 "params": { 00:21:35.708 "name": "key0", 00:21:35.708 "path": "/tmp/tmp.PNnyKSfr8l" 00:21:35.708 } 00:21:35.708 }, 00:21:35.708 { 00:21:35.708 "method": "keyring_file_add_key", 00:21:35.708 "params": { 00:21:35.708 "name": "key1", 00:21:35.708 "path": "/tmp/tmp.MC8y8xs5T4" 00:21:35.708 } 00:21:35.708 } 00:21:35.708 ] 00:21:35.708 }, 00:21:35.708 { 00:21:35.708 "subsystem": "iobuf", 00:21:35.708 "config": [ 00:21:35.708 { 00:21:35.708 "method": "iobuf_set_options", 00:21:35.708 "params": { 00:21:35.708 "large_bufsize": 135168, 00:21:35.708 "large_pool_count": 1024, 00:21:35.708 "small_bufsize": 8192, 00:21:35.708 "small_pool_count": 8192 00:21:35.708 } 00:21:35.708 } 00:21:35.708 ] 00:21:35.708 }, 00:21:35.708 { 00:21:35.708 "subsystem": "sock", 00:21:35.708 "config": [ 00:21:35.708 { 00:21:35.708 "method": "sock_impl_set_options", 00:21:35.708 "params": { 00:21:35.708 "enable_ktls": false, 00:21:35.708 "enable_placement_id": 0, 00:21:35.708 "enable_quickack": false, 00:21:35.708 "enable_recv_pipe": true, 00:21:35.708 "enable_zerocopy_send_client": false, 00:21:35.708 "enable_zerocopy_send_server": true, 00:21:35.708 "impl_name": "posix", 00:21:35.708 "recv_buf_size": 2097152, 00:21:35.708 "send_buf_size": 2097152, 00:21:35.708 "tls_version": 0, 00:21:35.708 "zerocopy_threshold": 0 00:21:35.708 } 00:21:35.708 }, 00:21:35.708 { 00:21:35.708 "method": "sock_impl_set_options", 00:21:35.708 "params": { 00:21:35.708 "enable_ktls": false, 00:21:35.708 "enable_placement_id": 0, 00:21:35.708 "enable_quickack": false, 00:21:35.708 "enable_recv_pipe": true, 00:21:35.708 "enable_zerocopy_send_client": false, 00:21:35.708 "enable_zerocopy_send_server": true, 00:21:35.708 "impl_name": "ssl", 00:21:35.708 "recv_buf_size": 4096, 00:21:35.708 "send_buf_size": 4096, 00:21:35.708 "tls_version": 0, 00:21:35.708 "zerocopy_threshold": 0 00:21:35.708 } 00:21:35.708 } 00:21:35.708 ] 00:21:35.708 }, 00:21:35.708 { 00:21:35.708 "subsystem": "vmd", 00:21:35.708 "config": [] 00:21:35.708 }, 00:21:35.708 { 00:21:35.708 "subsystem": "accel", 00:21:35.708 "config": [ 00:21:35.708 { 00:21:35.708 "method": "accel_set_options", 00:21:35.708 "params": { 00:21:35.708 "buf_count": 2048, 00:21:35.708 "large_cache_size": 16, 00:21:35.708 "sequence_count": 2048, 00:21:35.708 "small_cache_size": 128, 00:21:35.708 "task_count": 2048 00:21:35.708 } 00:21:35.708 } 00:21:35.708 ] 00:21:35.708 }, 00:21:35.708 { 00:21:35.708 "subsystem": "bdev", 00:21:35.708 "config": [ 00:21:35.708 { 00:21:35.708 "method": "bdev_set_options", 00:21:35.708 "params": { 00:21:35.708 "bdev_auto_examine": true, 00:21:35.708 "bdev_io_cache_size": 256, 00:21:35.708 "bdev_io_pool_size": 65535, 00:21:35.708 "iobuf_large_cache_size": 16, 00:21:35.708 "iobuf_small_cache_size": 128 00:21:35.708 } 00:21:35.708 }, 00:21:35.708 { 00:21:35.708 "method": "bdev_raid_set_options", 00:21:35.708 "params": { 00:21:35.708 "process_window_size_kb": 1024 00:21:35.708 } 00:21:35.708 }, 00:21:35.708 { 00:21:35.708 "method": "bdev_iscsi_set_options", 00:21:35.708 "params": { 00:21:35.708 "timeout_sec": 30 00:21:35.708 } 00:21:35.708 }, 00:21:35.708 { 00:21:35.708 "method": "bdev_nvme_set_options", 00:21:35.708 "params": { 00:21:35.709 
"action_on_timeout": "none", 00:21:35.709 "allow_accel_sequence": false, 00:21:35.709 "arbitration_burst": 0, 00:21:35.709 "bdev_retry_count": 3, 00:21:35.709 "ctrlr_loss_timeout_sec": 0, 00:21:35.709 "delay_cmd_submit": true, 00:21:35.709 "dhchap_dhgroups": [ 00:21:35.709 "null", 00:21:35.709 "ffdhe2048", 00:21:35.709 "ffdhe3072", 00:21:35.709 "ffdhe4096", 00:21:35.709 "ffdhe6144", 00:21:35.709 "ffdhe8192" 00:21:35.709 ], 00:21:35.709 "dhchap_digests": [ 00:21:35.709 "sha256", 00:21:35.709 "sha384", 00:21:35.709 "sha512" 00:21:35.709 ], 00:21:35.709 "disable_auto_failback": false, 00:21:35.709 "fast_io_fail_timeout_sec": 0, 00:21:35.709 "generate_uuids": false, 00:21:35.709 "high_priority_weight": 0, 00:21:35.709 "io_path_stat": false, 00:21:35.709 "io_queue_requests": 512, 00:21:35.709 "keep_alive_timeout_ms": 10000, 00:21:35.709 "low_priority_weight": 0, 00:21:35.709 "medium_priority_weight": 0, 00:21:35.709 "nvme_adminq_poll_period_us": 10000, 00:21:35.709 "nvme_error_stat": false, 00:21:35.709 "nvme_ioq_poll_period_us": 0, 00:21:35.709 "rdma_cm_event_timeout_ms": 0, 00:21:35.709 "rdma_max_cq_size": 0, 00:21:35.709 "rdma_srq_size": 0, 00:21:35.709 "reconnect_delay_sec": 0, 00:21:35.709 "timeout_admin_us": 0, 00:21:35.709 "timeout_us": 0, 00:21:35.709 "transport_ack_timeout": 0, 00:21:35.709 "transport_retry_count": 4, 00:21:35.709 "transport_tos": 0 00:21:35.709 } 00:21:35.709 }, 00:21:35.709 { 00:21:35.709 "method": "bdev_nvme_attach_controller", 00:21:35.709 "params": { 00:21:35.709 "adrfam": "IPv4", 00:21:35.709 "ctrlr_loss_timeout_sec": 0, 00:21:35.709 "ddgst": false, 00:21:35.709 "fast_io_fail_timeout_sec": 0, 00:21:35.709 "hdgst": false, 00:21:35.709 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:35.709 "name": "nvme0", 00:21:35.709 "prchk_guard": false, 00:21:35.709 "prchk_reftag": false, 00:21:35.709 "psk": "key0", 00:21:35.709 "reconnect_delay_sec": 0, 00:21:35.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:35.709 "traddr": "127.0.0.1", 00:21:35.709 "trsvcid": "4420", 00:21:35.709 "trtype": "TCP" 00:21:35.709 } 00:21:35.709 }, 00:21:35.709 { 00:21:35.709 "method": "bdev_nvme_set_hotplug", 00:21:35.709 "params": { 00:21:35.709 "enable": false, 00:21:35.709 "period_us": 100000 00:21:35.709 } 00:21:35.709 }, 00:21:35.709 { 00:21:35.709 "method": "bdev_wait_for_examine" 00:21:35.709 } 00:21:35.709 ] 00:21:35.709 }, 00:21:35.709 { 00:21:35.709 "subsystem": "nbd", 00:21:35.709 "config": [] 00:21:35.709 } 00:21:35.709 ] 00:21:35.709 }' 00:21:35.709 23:08:48 keyring_file -- keyring/file.sh@114 -- # killprocess 93739 00:21:35.709 23:08:48 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 93739 ']' 00:21:35.709 23:08:48 keyring_file -- common/autotest_common.sh@950 -- # kill -0 93739 00:21:35.709 23:08:48 keyring_file -- common/autotest_common.sh@951 -- # uname 00:21:35.709 23:08:48 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:35.709 23:08:48 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93739 00:21:35.709 23:08:48 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:35.709 23:08:48 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:35.709 killing process with pid 93739 00:21:35.709 23:08:48 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93739' 00:21:35.709 23:08:48 keyring_file -- common/autotest_common.sh@965 -- # kill 93739 00:21:35.709 Received shutdown signal, test time was about 1.000000 seconds 00:21:35.709 
00:21:35.709 Latency(us) 00:21:35.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.709 =================================================================================================================== 00:21:35.709 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.709 23:08:48 keyring_file -- common/autotest_common.sh@970 -- # wait 93739 00:21:35.980 23:08:48 keyring_file -- keyring/file.sh@117 -- # bperfpid=94209 00:21:35.980 23:08:48 keyring_file -- keyring/file.sh@119 -- # waitforlisten 94209 /var/tmp/bperf.sock 00:21:35.980 23:08:48 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 94209 ']' 00:21:35.980 23:08:48 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:35.980 23:08:48 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:35.980 23:08:48 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:35.980 23:08:48 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:35.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:35.980 23:08:48 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:35.980 23:08:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:35.980 23:08:48 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:35.980 "subsystems": [ 00:21:35.980 { 00:21:35.980 "subsystem": "keyring", 00:21:35.980 "config": [ 00:21:35.980 { 00:21:35.980 "method": "keyring_file_add_key", 00:21:35.980 "params": { 00:21:35.980 "name": "key0", 00:21:35.980 "path": "/tmp/tmp.PNnyKSfr8l" 00:21:35.980 } 00:21:35.980 }, 00:21:35.980 { 00:21:35.980 "method": "keyring_file_add_key", 00:21:35.980 "params": { 00:21:35.980 "name": "key1", 00:21:35.980 "path": "/tmp/tmp.MC8y8xs5T4" 00:21:35.980 } 00:21:35.980 } 00:21:35.980 ] 00:21:35.980 }, 00:21:35.980 { 00:21:35.980 "subsystem": "iobuf", 00:21:35.980 "config": [ 00:21:35.980 { 00:21:35.980 "method": "iobuf_set_options", 00:21:35.980 "params": { 00:21:35.980 "large_bufsize": 135168, 00:21:35.980 "large_pool_count": 1024, 00:21:35.980 "small_bufsize": 8192, 00:21:35.980 "small_pool_count": 8192 00:21:35.980 } 00:21:35.980 } 00:21:35.980 ] 00:21:35.980 }, 00:21:35.980 { 00:21:35.980 "subsystem": "sock", 00:21:35.980 "config": [ 00:21:35.980 { 00:21:35.980 "method": "sock_impl_set_options", 00:21:35.980 "params": { 00:21:35.980 "enable_ktls": false, 00:21:35.980 "enable_placement_id": 0, 00:21:35.980 "enable_quickack": false, 00:21:35.980 "enable_recv_pipe": true, 00:21:35.980 "enable_zerocopy_send_client": false, 00:21:35.980 "enable_zerocopy_send_server": true, 00:21:35.980 "impl_name": "posix", 00:21:35.980 "recv_buf_size": 2097152, 00:21:35.980 "send_buf_size": 2097152, 00:21:35.980 "tls_version": 0, 00:21:35.980 "zerocopy_threshold": 0 00:21:35.980 } 00:21:35.980 }, 00:21:35.980 { 00:21:35.980 "method": "sock_impl_set_options", 00:21:35.980 "params": { 00:21:35.980 "enable_ktls": false, 00:21:35.980 "enable_placement_id": 0, 00:21:35.980 "enable_quickack": false, 00:21:35.980 "enable_recv_pipe": true, 00:21:35.980 "enable_zerocopy_send_client": false, 00:21:35.980 "enable_zerocopy_send_server": true, 00:21:35.980 "impl_name": "ssl", 00:21:35.980 "recv_buf_size": 4096, 00:21:35.980 "send_buf_size": 4096, 00:21:35.980 "tls_version": 0, 
00:21:35.980 "zerocopy_threshold": 0 00:21:35.980 } 00:21:35.980 } 00:21:35.980 ] 00:21:35.980 }, 00:21:35.980 { 00:21:35.980 "subsystem": "vmd", 00:21:35.980 "config": [] 00:21:35.980 }, 00:21:35.980 { 00:21:35.980 "subsystem": "accel", 00:21:35.980 "config": [ 00:21:35.980 { 00:21:35.980 "method": "accel_set_options", 00:21:35.980 "params": { 00:21:35.980 "buf_count": 2048, 00:21:35.980 "large_cache_size": 16, 00:21:35.980 "sequence_count": 2048, 00:21:35.980 "small_cache_size": 128, 00:21:35.980 "task_count": 2048 00:21:35.980 } 00:21:35.980 } 00:21:35.980 ] 00:21:35.980 }, 00:21:35.980 { 00:21:35.980 "subsystem": "bdev", 00:21:35.980 "config": [ 00:21:35.980 { 00:21:35.980 "method": "bdev_set_options", 00:21:35.980 "params": { 00:21:35.980 "bdev_auto_examine": true, 00:21:35.980 "bdev_io_cache_size": 256, 00:21:35.980 "bdev_io_pool_size": 65535, 00:21:35.980 "iobuf_large_cache_size": 16, 00:21:35.980 "iobuf_small_cache_size": 128 00:21:35.980 } 00:21:35.980 }, 00:21:35.980 { 00:21:35.980 "method": "bdev_raid_set_options", 00:21:35.980 "params": { 00:21:35.980 "process_window_size_kb": 1024 00:21:35.980 } 00:21:35.980 }, 00:21:35.980 { 00:21:35.980 "method": "bdev_iscsi_set_options", 00:21:35.980 "params": { 00:21:35.980 "timeout_sec": 30 00:21:35.980 } 00:21:35.980 }, 00:21:35.980 { 00:21:35.980 "method": "bdev_nvme_set_options", 00:21:35.980 "params": { 00:21:35.980 "action_on_timeout": "none", 00:21:35.980 "allow_accel_sequence": false, 00:21:35.980 "arbitration_burst": 0, 00:21:35.980 "bdev_retry_count": 3, 00:21:35.980 "ctrlr_loss_timeout_sec": 0, 00:21:35.980 "delay_cmd_submit": true, 00:21:35.980 "dhchap_dhgroups": [ 00:21:35.980 "null", 00:21:35.980 "ffdhe2048", 00:21:35.980 "ffdhe3072", 00:21:35.980 "ffdhe4096", 00:21:35.980 "ffdhe6144", 00:21:35.980 "ffdhe8192" 00:21:35.980 ], 00:21:35.980 "dhchap_digests": [ 00:21:35.980 "sha256", 00:21:35.980 "sha384", 00:21:35.980 "sha512" 00:21:35.980 ], 00:21:35.980 "disable_auto_failback": false, 00:21:35.980 "fast_io_fail_timeout_sec": 0, 00:21:35.980 "generate_uuids": false, 00:21:35.980 "high_priority_weight": 0, 00:21:35.980 "io_path_stat": false, 00:21:35.980 "io_queue_requests": 512, 00:21:35.980 "keep_alive_timeout_ms": 10000, 00:21:35.980 "low_priority_weight": 0, 00:21:35.980 "medium_priority_weight": 0, 00:21:35.980 "nvme_adminq_poll_period_us": 10000, 00:21:35.980 "nvme_error_stat": false, 00:21:35.980 "nvme_ioq_poll_period_us": 0, 00:21:35.980 "rdma_cm_event_timeout_ms": 0, 00:21:35.980 "rdma_max_cq_size": 0, 00:21:35.980 "rdma_srq_size": 0, 00:21:35.980 "reconnect_delay_sec": 0, 00:21:35.980 "timeout_admin_us": 0, 00:21:35.980 "timeout_us": 0, 00:21:35.980 "transport_ack_timeout": 0, 00:21:35.980 "transport_retry_count": 4, 00:21:35.980 "transport_tos": 0 00:21:35.980 } 00:21:35.980 }, 00:21:35.980 { 00:21:35.980 "method": "bdev_nvme_attach_controller", 00:21:35.980 "params": { 00:21:35.980 "adrfam": "IPv4", 00:21:35.980 "ctrlr_loss_timeout_sec": 0, 00:21:35.980 "ddgst": false, 00:21:35.980 "fast_io_fail_timeout_sec": 0, 00:21:35.980 "hdgst": false, 00:21:35.980 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:35.980 "name": "nvme0", 00:21:35.981 "prchk_guard": false, 00:21:35.981 "prchk_reftag": false, 00:21:35.981 "psk": "key0", 00:21:35.981 "reconnect_delay_sec": 0, 00:21:35.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:35.981 "traddr": "127.0.0.1", 00:21:35.981 "trsvcid": "4420", 00:21:35.981 "trtype": "TCP" 00:21:35.981 } 00:21:35.981 }, 00:21:35.981 { 00:21:35.981 "method": "bdev_nvme_set_hotplug", 00:21:35.981 
"params": { 00:21:35.981 "enable": false, 00:21:35.981 "period_us": 100000 00:21:35.981 } 00:21:35.981 }, 00:21:35.981 { 00:21:35.981 "method": "bdev_wait_for_examine" 00:21:35.981 } 00:21:35.981 ] 00:21:35.981 }, 00:21:35.981 { 00:21:35.981 "subsystem": "nbd", 00:21:35.981 "config": [] 00:21:35.981 } 00:21:35.981 ] 00:21:35.981 }' 00:21:35.981 [2024-05-14 23:08:48.299221] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:35.981 [2024-05-14 23:08:48.299350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94209 ] 00:21:36.239 [2024-05-14 23:08:48.439024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.239 [2024-05-14 23:08:48.525803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.497 [2024-05-14 23:08:48.674225] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.062 23:08:49 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:37.062 23:08:49 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:21:37.062 23:08:49 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:37.062 23:08:49 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:37.062 23:08:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.320 23:08:49 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:37.321 23:08:49 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:37.321 23:08:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:37.321 23:08:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:37.321 23:08:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:37.321 23:08:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:37.321 23:08:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.887 23:08:49 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:37.887 23:08:49 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:37.887 23:08:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:37.887 23:08:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:37.887 23:08:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:37.887 23:08:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:37.887 23:08:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.887 23:08:50 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:37.887 23:08:50 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:37.887 23:08:50 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:37.887 23:08:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:38.145 23:08:50 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:38.145 23:08:50 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:38.145 23:08:50 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.PNnyKSfr8l /tmp/tmp.MC8y8xs5T4 00:21:38.145 
23:08:50 keyring_file -- keyring/file.sh@20 -- # killprocess 94209 00:21:38.145 23:08:50 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 94209 ']' 00:21:38.145 23:08:50 keyring_file -- common/autotest_common.sh@950 -- # kill -0 94209 00:21:38.145 23:08:50 keyring_file -- common/autotest_common.sh@951 -- # uname 00:21:38.145 23:08:50 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:38.145 23:08:50 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94209 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94209' 00:21:38.404 killing process with pid 94209 00:21:38.404 Received shutdown signal, test time was about 1.000000 seconds 00:21:38.404 00:21:38.404 Latency(us) 00:21:38.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.404 =================================================================================================================== 00:21:38.404 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@965 -- # kill 94209 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@970 -- # wait 94209 00:21:38.404 23:08:50 keyring_file -- keyring/file.sh@21 -- # killprocess 93704 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 93704 ']' 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@950 -- # kill -0 93704 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@951 -- # uname 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93704 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:38.404 killing process with pid 93704 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93704' 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@965 -- # kill 93704 00:21:38.404 [2024-05-14 23:08:50.753435] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:38.404 [2024-05-14 23:08:50.753476] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:38.404 23:08:50 keyring_file -- common/autotest_common.sh@970 -- # wait 93704 00:21:38.662 00:21:38.662 real 0m16.579s 00:21:38.662 user 0m42.304s 00:21:38.662 sys 0m3.158s 00:21:38.662 23:08:51 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:38.662 23:08:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:38.662 ************************************ 00:21:38.662 END TEST keyring_file 00:21:38.662 ************************************ 00:21:38.921 23:08:51 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:21:38.921 23:08:51 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:21:38.921 23:08:51 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:21:38.921 23:08:51 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 
']' 00:21:38.921 23:08:51 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:21:38.921 23:08:51 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:21:38.921 23:08:51 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:21:38.921 23:08:51 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:21:38.921 23:08:51 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:21:38.921 23:08:51 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:21:38.921 23:08:51 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:21:38.921 23:08:51 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:21:38.921 23:08:51 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:21:38.921 23:08:51 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:21:38.921 23:08:51 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:21:38.921 23:08:51 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:21:38.921 23:08:51 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:21:38.921 23:08:51 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:21:38.921 23:08:51 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:38.921 23:08:51 -- common/autotest_common.sh@10 -- # set +x 00:21:38.921 23:08:51 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:21:38.921 23:08:51 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:21:38.921 23:08:51 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:21:38.921 23:08:51 -- common/autotest_common.sh@10 -- # set +x 00:21:40.301 INFO: APP EXITING 00:21:40.301 INFO: killing all VMs 00:21:40.301 INFO: killing vhost app 00:21:40.301 INFO: EXIT DONE 00:21:40.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:40.867 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:40.867 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:41.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:41.691 Cleaning 00:21:41.691 Removing: /var/run/dpdk/spdk0/config 00:21:41.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:41.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:41.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:41.691 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:41.691 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:41.691 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:41.691 Removing: /var/run/dpdk/spdk1/config 00:21:41.691 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:41.691 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:41.691 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:41.691 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:41.691 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:41.691 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:41.691 Removing: /var/run/dpdk/spdk2/config 00:21:41.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:41.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:41.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:41.691 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:41.691 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:41.691 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:41.691 Removing: /var/run/dpdk/spdk3/config 00:21:41.691 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:41.691 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:41.691 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:41.691 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:41.691 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:41.691 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:41.691 Removing: /var/run/dpdk/spdk4/config 00:21:41.691 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:41.691 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:41.691 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:41.692 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:41.692 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:41.692 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:41.692 Removing: /dev/shm/nvmf_trace.0 00:21:41.692 Removing: /dev/shm/spdk_tgt_trace.pid60067 00:21:41.692 Removing: /var/run/dpdk/spdk0 00:21:41.692 Removing: /var/run/dpdk/spdk1 00:21:41.692 Removing: /var/run/dpdk/spdk2 00:21:41.692 Removing: /var/run/dpdk/spdk3 00:21:41.692 Removing: /var/run/dpdk/spdk4 00:21:41.692 Removing: /var/run/dpdk/spdk_pid59927 00:21:41.692 Removing: /var/run/dpdk/spdk_pid60067 00:21:41.692 Removing: /var/run/dpdk/spdk_pid60314 00:21:41.692 Removing: /var/run/dpdk/spdk_pid60407 00:21:41.692 Removing: /var/run/dpdk/spdk_pid60441 00:21:41.692 Removing: /var/run/dpdk/spdk_pid60556 00:21:41.692 Removing: /var/run/dpdk/spdk_pid60586 00:21:41.692 Removing: /var/run/dpdk/spdk_pid60704 00:21:41.692 Removing: /var/run/dpdk/spdk_pid60973 00:21:41.692 Removing: /var/run/dpdk/spdk_pid61150 00:21:41.692 Removing: /var/run/dpdk/spdk_pid61227 00:21:41.692 Removing: /var/run/dpdk/spdk_pid61313 00:21:41.692 Removing: /var/run/dpdk/spdk_pid61408 00:21:41.692 Removing: /var/run/dpdk/spdk_pid61441 00:21:41.692 Removing: /var/run/dpdk/spdk_pid61477 00:21:41.692 Removing: /var/run/dpdk/spdk_pid61538 00:21:41.692 Removing: /var/run/dpdk/spdk_pid61639 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62256 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62309 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62370 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62379 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62458 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62467 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62546 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62574 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62630 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62656 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62706 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62718 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62865 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62900 00:21:41.692 Removing: /var/run/dpdk/spdk_pid62969 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63025 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63050 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63108 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63143 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63177 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63212 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63248 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63277 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63312 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63346 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63381 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63414 00:21:41.692 Removing: /var/run/dpdk/spdk_pid63450 00:21:41.951 Removing: /var/run/dpdk/spdk_pid63479 00:21:41.951 Removing: /var/run/dpdk/spdk_pid63513 00:21:41.951 Removing: /var/run/dpdk/spdk_pid63548 00:21:41.951 Removing: /var/run/dpdk/spdk_pid63577 00:21:41.951 Removing: /var/run/dpdk/spdk_pid63617 00:21:41.951 Removing: /var/run/dpdk/spdk_pid63646 00:21:41.951 Removing: 
/var/run/dpdk/spdk_pid63689 00:21:41.951 Removing: /var/run/dpdk/spdk_pid63721 00:21:41.951 Removing: /var/run/dpdk/spdk_pid63750 00:21:41.951 Removing: /var/run/dpdk/spdk_pid63791 00:21:41.951 Removing: /var/run/dpdk/spdk_pid63856 00:21:41.951 Removing: /var/run/dpdk/spdk_pid63947 00:21:41.951 Removing: /var/run/dpdk/spdk_pid64346 00:21:41.951 Removing: /var/run/dpdk/spdk_pid67645 00:21:41.951 Removing: /var/run/dpdk/spdk_pid67990 00:21:41.951 Removing: /var/run/dpdk/spdk_pid70431 00:21:41.951 Removing: /var/run/dpdk/spdk_pid70789 00:21:41.951 Removing: /var/run/dpdk/spdk_pid71020 00:21:41.951 Removing: /var/run/dpdk/spdk_pid71071 00:21:41.951 Removing: /var/run/dpdk/spdk_pid71917 00:21:41.951 Removing: /var/run/dpdk/spdk_pid71963 00:21:41.951 Removing: /var/run/dpdk/spdk_pid72324 00:21:41.951 Removing: /var/run/dpdk/spdk_pid72834 00:21:41.951 Removing: /var/run/dpdk/spdk_pid73280 00:21:41.951 Removing: /var/run/dpdk/spdk_pid74252 00:21:41.951 Removing: /var/run/dpdk/spdk_pid75214 00:21:41.951 Removing: /var/run/dpdk/spdk_pid75338 00:21:41.951 Removing: /var/run/dpdk/spdk_pid75401 00:21:41.951 Removing: /var/run/dpdk/spdk_pid76866 00:21:41.951 Removing: /var/run/dpdk/spdk_pid77099 00:21:41.951 Removing: /var/run/dpdk/spdk_pid77536 00:21:41.951 Removing: /var/run/dpdk/spdk_pid77650 00:21:41.951 Removing: /var/run/dpdk/spdk_pid77788 00:21:41.951 Removing: /var/run/dpdk/spdk_pid77819 00:21:41.951 Removing: /var/run/dpdk/spdk_pid77847 00:21:41.951 Removing: /var/run/dpdk/spdk_pid77892 00:21:41.951 Removing: /var/run/dpdk/spdk_pid78027 00:21:41.951 Removing: /var/run/dpdk/spdk_pid78155 00:21:41.951 Removing: /var/run/dpdk/spdk_pid78398 00:21:41.951 Removing: /var/run/dpdk/spdk_pid78502 00:21:41.951 Removing: /var/run/dpdk/spdk_pid78761 00:21:41.951 Removing: /var/run/dpdk/spdk_pid78867 00:21:41.951 Removing: /var/run/dpdk/spdk_pid78988 00:21:41.951 Removing: /var/run/dpdk/spdk_pid79331 00:21:41.951 Removing: /var/run/dpdk/spdk_pid79718 00:21:41.951 Removing: /var/run/dpdk/spdk_pid80022 00:21:41.951 Removing: /var/run/dpdk/spdk_pid80510 00:21:41.951 Removing: /var/run/dpdk/spdk_pid80516 00:21:41.951 Removing: /var/run/dpdk/spdk_pid80841 00:21:41.951 Removing: /var/run/dpdk/spdk_pid80861 00:21:41.951 Removing: /var/run/dpdk/spdk_pid80875 00:21:41.951 Removing: /var/run/dpdk/spdk_pid80910 00:21:41.951 Removing: /var/run/dpdk/spdk_pid80916 00:21:41.951 Removing: /var/run/dpdk/spdk_pid81214 00:21:41.951 Removing: /var/run/dpdk/spdk_pid81263 00:21:41.951 Removing: /var/run/dpdk/spdk_pid81593 00:21:41.951 Removing: /var/run/dpdk/spdk_pid81830 00:21:41.951 Removing: /var/run/dpdk/spdk_pid82334 00:21:41.951 Removing: /var/run/dpdk/spdk_pid82885 00:21:41.951 Removing: /var/run/dpdk/spdk_pid84249 00:21:41.951 Removing: /var/run/dpdk/spdk_pid84864 00:21:41.951 Removing: /var/run/dpdk/spdk_pid84866 00:21:41.951 Removing: /var/run/dpdk/spdk_pid86875 00:21:41.951 Removing: /var/run/dpdk/spdk_pid86946 00:21:41.951 Removing: /var/run/dpdk/spdk_pid87027 00:21:41.951 Removing: /var/run/dpdk/spdk_pid87113 00:21:41.951 Removing: /var/run/dpdk/spdk_pid87259 00:21:41.951 Removing: /var/run/dpdk/spdk_pid87355 00:21:41.951 Removing: /var/run/dpdk/spdk_pid87425 00:21:41.951 Removing: /var/run/dpdk/spdk_pid87506 00:21:41.951 Removing: /var/run/dpdk/spdk_pid87831 00:21:41.951 Removing: /var/run/dpdk/spdk_pid88504 00:21:41.951 Removing: /var/run/dpdk/spdk_pid89853 00:21:41.951 Removing: /var/run/dpdk/spdk_pid90053 00:21:41.951 Removing: /var/run/dpdk/spdk_pid90344 00:21:41.951 Removing: /var/run/dpdk/spdk_pid90654 
00:21:41.951 Removing: /var/run/dpdk/spdk_pid91187 00:21:41.951 Removing: /var/run/dpdk/spdk_pid91198 00:21:41.951 Removing: /var/run/dpdk/spdk_pid91553 00:21:41.951 Removing: /var/run/dpdk/spdk_pid91711 00:21:41.951 Removing: /var/run/dpdk/spdk_pid91869 00:21:41.951 Removing: /var/run/dpdk/spdk_pid91967 00:21:41.951 Removing: /var/run/dpdk/spdk_pid92121 00:21:41.951 Removing: /var/run/dpdk/spdk_pid92226 00:21:41.951 Removing: /var/run/dpdk/spdk_pid92891 00:21:41.951 Removing: /var/run/dpdk/spdk_pid92926 00:21:41.951 Removing: /var/run/dpdk/spdk_pid92962 00:21:41.951 Removing: /var/run/dpdk/spdk_pid93215 00:21:41.951 Removing: /var/run/dpdk/spdk_pid93250 00:21:41.951 Removing: /var/run/dpdk/spdk_pid93281 00:21:41.951 Removing: /var/run/dpdk/spdk_pid93704 00:21:41.951 Removing: /var/run/dpdk/spdk_pid93739 00:21:41.951 Removing: /var/run/dpdk/spdk_pid94209 00:21:41.951 Clean 00:21:42.210 23:08:54 -- common/autotest_common.sh@1447 -- # return 0 00:21:42.210 23:08:54 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:21:42.210 23:08:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.210 23:08:54 -- common/autotest_common.sh@10 -- # set +x 00:21:42.210 23:08:54 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:21:42.210 23:08:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.210 23:08:54 -- common/autotest_common.sh@10 -- # set +x 00:21:42.210 23:08:54 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:42.210 23:08:54 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:42.210 23:08:54 -- spdk/autotest.sh@385 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:42.210 23:08:54 -- spdk/autotest.sh@387 -- # hash lcov 00:21:42.210 23:08:54 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:42.210 23:08:54 -- spdk/autotest.sh@389 -- # hostname 00:21:42.210 23:08:54 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:42.468 geninfo: WARNING: invalid characters removed from testname! 
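The coverage post-processing that starts here follows a capture, merge, prune sequence: lcov captures the post-test counters with the hostname as the test name, adds them to the pre-test baseline, and then strips third-party or uninteresting paths from the combined tracefile. A condensed sketch of that flow, assuming this run's repository and output paths and showing only a subset of the rc options and pruned patterns:

    # Condensed sketch of the lcov flow in autotest.sh (capture -> merge -> prune).
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    SRC=/home/vagrant/spdk_repo/spdk
    OUT=$SRC/../output

    lcov $LCOV_OPTS -c -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"                    # capture post-test data
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info" # merge with the baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*'; do                                    # drop external code
        lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done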
00:22:14.633 23:09:21 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:14.633 23:09:25 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:16.532 23:09:28 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:19.065 23:09:31 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:21.698 23:09:33 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:24.987 23:09:36 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:27.521 23:09:39 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:27.521 23:09:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:27.521 23:09:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:27.521 23:09:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.521 23:09:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.521 23:09:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.521 23:09:39 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.521 23:09:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.521 23:09:39 -- paths/export.sh@5 -- $ export PATH 00:22:27.521 23:09:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.521 23:09:39 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:27.521 23:09:39 -- common/autobuild_common.sh@437 -- $ date +%s 00:22:27.521 23:09:39 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715728179.XXXXXX 00:22:27.521 23:09:39 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715728179.SZE5en 00:22:27.521 23:09:39 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:22:27.521 23:09:39 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:22:27.521 23:09:39 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:27.521 23:09:39 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:27.521 23:09:39 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:27.521 23:09:39 -- common/autobuild_common.sh@453 -- $ get_config_params 00:22:27.521 23:09:39 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:22:27.521 23:09:39 -- common/autotest_common.sh@10 -- $ set +x 00:22:27.521 23:09:39 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:22:27.521 23:09:39 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:22:27.521 23:09:39 -- pm/common@17 -- $ local monitor 00:22:27.521 23:09:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:27.521 23:09:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:27.521 23:09:39 -- pm/common@21 -- $ date +%s 00:22:27.521 23:09:39 -- pm/common@25 -- $ sleep 1 00:22:27.521 23:09:39 -- pm/common@21 -- $ date +%s 00:22:27.521 23:09:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715728179 00:22:27.521 23:09:39 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715728179 00:22:27.521 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715728179_collect-vmstat.pm.log 00:22:27.521 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715728179_collect-cpu-load.pm.log 00:22:28.457 23:09:40 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:22:28.457 23:09:40 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:28.457 23:09:40 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:28.457 23:09:40 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:28.457 23:09:40 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:28.457 23:09:40 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:28.457 23:09:40 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:28.457 23:09:40 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:28.457 23:09:40 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:28.457 23:09:40 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:28.457 23:09:40 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:28.457 23:09:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:28.457 23:09:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:28.457 23:09:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:28.457 23:09:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:28.457 23:09:40 -- pm/common@44 -- $ pid=95859 00:22:28.457 23:09:40 -- pm/common@50 -- $ kill -TERM 95859 00:22:28.457 23:09:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:28.457 23:09:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:28.457 23:09:40 -- pm/common@44 -- $ pid=95861 00:22:28.457 23:09:40 -- pm/common@50 -- $ kill -TERM 95861 00:22:28.457 + [[ -n 5147 ]] 00:22:28.457 + sudo kill 5147 00:22:28.466 [Pipeline] } 00:22:28.486 [Pipeline] // timeout 00:22:28.491 [Pipeline] } 00:22:28.511 [Pipeline] // stage 00:22:28.517 [Pipeline] } 00:22:28.539 [Pipeline] // catchError 00:22:28.556 [Pipeline] stage 00:22:28.560 [Pipeline] { (Stop VM) 00:22:28.584 [Pipeline] sh 00:22:28.862 + vagrant halt 00:22:33.064 ==> default: Halting domain... 00:22:39.651 [Pipeline] sh 00:22:39.930 + vagrant destroy -f 00:22:44.118 ==> default: Removing domain... 
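Earlier in this block, stop_monitor_resources shut down the per-run resource collectors (collect-cpu-load, collect-vmstat) before the VM was halted and destroyed. It relies on a plain pid-file convention: each collector records its pid under the power/ output directory and is asked to exit with SIGTERM. The snippet below is only a simplified sketch of that teardown, assuming this run's output layout:

    # Simplified sketch of the pid-file based teardown behind signal_monitor_resources TERM.
    # The directory mirrors this run's layout; the loop body is a stand-in, not the real pm/common.
    POWER_DIR=/home/vagrant/spdk_repo/spdk/../output/power

    for pidfile in "$POWER_DIR"/collect-cpu-load.pid "$POWER_DIR"/collect-vmstat.pid; do
        [[ -e "$pidfile" ]] || continue        # that collector was never started
        pid=$(<"$pidfile")                     # pid recorded by the collector at startup
        kill -TERM "$pid" 2>/dev/null || true  # let it flush its .pm.log and exit
    done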
00:22:44.130 [Pipeline] sh 00:22:44.449 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:22:44.458 [Pipeline] } 00:22:44.476 [Pipeline] // stage 00:22:44.487 [Pipeline] } 00:22:44.503 [Pipeline] // dir 00:22:44.509 [Pipeline] } 00:22:44.525 [Pipeline] // wrap 00:22:44.532 [Pipeline] } 00:22:44.548 [Pipeline] // catchError 00:22:44.556 [Pipeline] stage 00:22:44.558 [Pipeline] { (Epilogue) 00:22:44.573 [Pipeline] sh 00:22:44.853 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:51.425 [Pipeline] catchError 00:22:51.427 [Pipeline] { 00:22:51.441 [Pipeline] sh 00:22:51.726 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:51.985 Artifacts sizes are good 00:22:51.993 [Pipeline] } 00:22:52.009 [Pipeline] // catchError 00:22:52.019 [Pipeline] archiveArtifacts 00:22:52.025 Archiving artifacts 00:22:52.174 [Pipeline] cleanWs 00:22:52.184 [WS-CLEANUP] Deleting project workspace... 00:22:52.184 [WS-CLEANUP] Deferred wipeout is used... 00:22:52.189 [WS-CLEANUP] done 00:22:52.191 [Pipeline] } 00:22:52.209 [Pipeline] // stage 00:22:52.213 [Pipeline] } 00:22:52.228 [Pipeline] // node 00:22:52.233 [Pipeline] End of Pipeline 00:22:52.263 Finished: SUCCESS
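After the VM is destroyed, the epilogue moves the collected output back into the Jenkins workspace, compresses it, and runs a size sanity check (the "Artifacts sizes are good" line above) before archiving and cleaning the workspace. The real check_artifacts_size.sh is not reproduced in this log; the snippet below is only an illustrative stand-in for that kind of guard, with an assumed directory and limit.

    # Illustrative stand-in for an artifact size guard; NOT the actual check_artifacts_size.sh.
    # The directory and the 5 GiB limit are assumptions made for the example.
    ARTIFACT_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/output
    LIMIT_KB=$((5 * 1024 * 1024))

    used_kb=$(du -sk "$ARTIFACT_DIR" | awk '{print $1}')
    if (( used_kb > LIMIT_KB )); then
        echo "Artifacts are too large: ${used_kb} KB (limit ${LIMIT_KB} KB)" >&2
        exit 1
    fi
    echo "Artifacts sizes are good"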